Merge pull request #6349 from EnterpriseDB/release/2024-12-12a
Release: 2024-12-12a
gvasquezvargas authored Dec 12, 2024
2 parents 3f91fd4 + fb53637 commit a0dbcf8
Showing 24 changed files with 526 additions and 168 deletions.
57 changes: 57 additions & 0 deletions .github/workflows/generate-release-notes.yml
@@ -0,0 +1,57 @@
name: generate release notes
on:
  pull_request:
    types: [opened, synchronize]
    paths:
      - "**/src/*.yml"
      - "**/src/*.yaml"
jobs:
  release-notes:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repo
        uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.head.ref }}
          path: content
          sparse-checkout: |
            advocacy_docs
            product_docs
      - name: Checkout relgen tool
        uses: actions/checkout@v4
        with:
          ref: develop
          path: tools
          sparse-checkout: |
            tools
      - name: setup node
        uses: actions/setup-node@v4

      - name: install dependencies
        run: npm --prefix ./tools/tools/automation/generators/relgen ci

      # TODO: limit this to paths that have actually *changed*
      - name: regenerate relnotes
        run: |
          shopt -s globstar
          for rnmetapath in ./content/**/src/meta.yml; do
            ./tools/tools/automation/generators/relgen/relgen.js -p ${rnmetapath%/src/meta.yml}
          done
      - name: check for modified files
        id: changes
        run: |
          cd ./content
          echo "files=`git ls-files --other --modified --exclude-standard | wc -l`" >> $GITHUB_OUTPUT
      - name: commit modified files
        if: steps.changes.outputs.files > 0
        run: |
          cd ./content
          git config user.name "github-actions[bot]"
          git config user.email "41898282+github-actions[bot]@users.noreply.github.com"
          git add .
          git commit -m "update generated release notes"
          git push
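
Two shell idioms carry this workflow: `globstar` recursive globbing with `${var%pattern}` suffix stripping in the regenerate step, and a file count written to `$GITHUB_OUTPUT` to gate the commit step. A minimal standalone sketch (the meta.yml path below is hypothetical):

#!/usr/bin/env bash
# globstar lets ** match nested directories (bash 4+)
shopt -s globstar

# ${var%pattern} strips the shortest matching suffix, turning the
# meta.yml path into the product directory that relgen.js -p expects
rnmetapath="./content/product_docs/docs/example/src/meta.yml"
echo "${rnmetapath%/src/meta.yml}"    # -> ./content/product_docs/docs/example

# Count untracked and modified files; the workflow writes this number to
# $GITHUB_OUTPUT so the commit step can be skipped when it's zero
git ls-files --other --modified --exclude-standard | wc -l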
70 changes: 57 additions & 13 deletions .github/workflows/sync-and-process-files.yml
@@ -2,55 +2,99 @@ name: sync and process files from another repo
 on:
   repository_dispatch:
     types: [sync-to-docs]
+  workflow_dispatch:
+    inputs:
+      repo:
+        description: Repository to source documentation from
+        required: true
+        type: string
+      ref:
+        description: Ref name in the source repo
+        required: true
+        type: string
+      sha:
+        description: SHA in the source repo, should correspond to ref
+        required: true
+        type: string
 
 jobs:
   sync-and-process-files:
     permissions:
       contents: write
       pull-requests: write
     env:
+      SOURCE_REPO: ${{ github.event.client_payload.repo || inputs.repo }}
+      SOURCE_REF: ${{ github.event.client_payload.ref || inputs.ref }}
+      SOURCE_SHA: ${{ github.event.client_payload.sha || inputs.sha }}
+
       # The body text of the PR requests that will be created
-      BODY: "Automated changes to pull in and process updates from repo: ${{ github.event.client_payload.repo }} ref: ${{ github.event.client_payload.ref }}"
-      # The name of the branch that will be created
-      BRANCH_NAME: automatic_docs_update/repo_${{ github.event.client_payload.repo }}/ref_${{ github.event.client_payload.ref }}
+      BODY: |
+        Automated changes to pull in and process updates from repo: ${{ github.event.client_payload.repo || inputs.repo }} ref: ${{ github.event.client_payload.ref || inputs.ref }}
+
+        ## Reviewing
+        - Look for formatting that may not work as intended
+        - Watch out for local changes (factual corrections, copy edits, link fixes) that may be overwritten
+        - You may need to resolve conflicts before merging - check the upstream repo for context when this isn't obvious
+      # The users that should be assigned to the PR as a comma separated list of github usernames.
+      REVIEWERS:
+      # The name of the branch that will be created
+      BRANCH_NAME: automatic_docs_update/repo_${{ github.event.client_payload.repo || inputs.repo }}/ref_${{ github.event.client_payload.ref || inputs.ref }}
 
       # The title of the PR request that will be created
-      TITLE: "Process changes to docs from: repo: ${{ github.event.client_payload.repo }} ref: ${{ github.event.client_payload.ref }}"
+      TITLE: "Process changes to docs from: repo: ${{ github.event.client_payload.repo || inputs.repo }} ref: ${{ github.event.client_payload.ref || inputs.ref }}"
 
-    runs-on: ubuntu-latest
+    runs-on: ubuntu-22.04
     steps:
+      - name: Check inputs
+        if: ${{ !env.SOURCE_REPO || !env.SOURCE_REF || !env.SOURCE_SHA }}
+        run: |
+          echo "::error title=missing inputs::must provide source repo, source ref and source SHA"
+          exit 1
       - name: Checkout destination
         uses: actions/checkout@v4
         with:
           path: destination
           lfs: true
 
       - name: Checkout source repo
         uses: actions/checkout@v4
         with:
-          ref: ${{ github.event.client_payload.sha }}
-          repository: ${{ github.event.client_payload.repo }}
+          ref: ${{ env.SOURCE_SHA }}
+          repository: ${{ env.SOURCE_REPO }}
           token: ${{ secrets.SYNC_FILES_TOKEN }}
           path: source
 
       - name: setup node
         uses: actions/setup-node@v4
         with:
-          node-version: "14"
+          node-version: "18"
 
       - name: update npm
-        run: npm install -g npm@7
+        run: npm install -g npm@10
 
       - name: Process changes
-        run: ${{ github.workspace }}/destination/scripts/source/dispatch_product.py ${{ github.event.client_payload.repo }} ${{ github.workspace }}
+        id: changes
+        run: ${{ github.workspace }}/destination/scripts/source/dispatch_product.py ${{ env.SOURCE_REPO }} ${{ github.workspace }}
         working-directory: source
 
+      - name: Update PR body
+        if: ${{ steps.changes.outputs.new-tag }}
+        run: |
+          echo 'BODY<<EOF' >> $GITHUB_ENV
+          echo "$BODY" >> $GITHUB_ENV
+          echo '## After merging' >> $GITHUB_ENV
+          echo 'Create a tag named `${{ steps.changes.outputs.new-tag }}` that points to the merge commit' >> $GITHUB_ENV
+          echo 'EOF' >> $GITHUB_ENV
       - name: Create pull request
         if: ${{ !env.ACT }}
         uses: peter-evans/create-pull-request@v6
         with:
           body: ${{ env.BODY }}
           branch: ${{ env.BRANCH_NAME }}
           base: develop
           path: destination/
           reviewers: ${{ env.REVIEWERS }}
           title: ${{ env.TITLE }}
           token: ${{ secrets.GITHUB_TOKEN }}
+          commit-message: "Sync ${{ env.SOURCE_REPO }} ${{ steps.changes.outputs.new-tag }}"
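
For reference, a `repository_dispatch` workflow like this one is triggered by a POST to the repository's dispatches API endpoint; the `client_payload` becomes `github.event.client_payload` in the receiving run. A hedged sketch of how a source repo's CI might fire the `sync-to-docs` event (owner, repo names, and payload values are placeholders):

curl -X POST \
  -H "Authorization: Bearer $GITHUB_TOKEN" \
  -H "Accept: application/vnd.github+json" \
  https://api.github.com/repos/OWNER/DOCS_REPO/dispatches \
  -d '{"event_type": "sync-to-docs",
       "client_payload": {"repo": "OWNER/SOURCE_REPO", "ref": "main", "sha": "<commit-sha>"}}'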

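The "Update PR body" step relies on GitHub Actions' heredoc-style syntax for appending a multiline value to `$GITHUB_ENV` (`NAME<<DELIMITER` ... `DELIMITER`). In isolation, the pattern looks like this (variable name and content are illustrative):

# Later steps see $BODY with both lines as its value
{
  echo 'BODY<<EOF'
  echo 'first line'
  echo 'second line'
  echo 'EOF'
} >> "$GITHUB_ENV"
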
@@ -0,0 +1,104 @@
---
title: "EDB Postgres AI Q4 2024 release highlights"
navTitle: Q4 2024 release highlights
description: The latest features released and updated in EDB Postgres AI.
date: 2024-12-10
---

Date: **December 10, 2024**

This [release roundup](https://www.enterprisedb.com/blog/solving-enterprise-generative-ai-and-analytics-challenges-zooming-our-q4-2024-release) originally appeared on the EDB blog.

## Introducing the EDB Postgres AI Software Deployment: cloud agility, on your terms

### **Enable cloud agility and AI sovereignty for critical data infrastructure – anywhere, in any environment**

Earlier today, we [announced](https://www.enterprisedb.com/news/edb-brings-cloud-agility-and-observability-hybrid-environments-sovereign-control) major updates to the [EDB Postgres AI](https://www.enterprisedb.com/products/edb-postgres-ai) sovereign data and AI platform. As data and AI become increasingly important to business innovation, our customers have asked us for more flexible solutions that offer both agility and control.

In response, we’ve launched a number of new generally available and preview capabilities to help accelerate deployment of EDB Postgres AI in [sovereign](https://www.enterprisedb.com/use-case/sovereign-ai), hybrid environments as an [omni-data platform](https://www.enterprisedb.com/use-case/omni-data-platform) that works across your enterprise’s data corpus, driving faster time to market for data-driven applications. With the new [EDB Postgres AI Software Deployment](https://www.enterprisedb.com/products/software-deployment), you can deploy, manage, scale, and observe mission-critical data infrastructure in any self-managed, hybrid, or public cloud environment.

The single container-driven software installation enables the consolidation of structured and unstructured data in a single multi-model data platform to accelerate transactional, analytical, and AI workloads. The Software Deployment unlocks a number of new capabilities:

1. **Hybrid Control Plane**, enabling a hybrid database-as-a-service (DBaaS) with Kubernetes-driven automation and advanced observability across 200+ metrics to enable a cloud-like experience – even in your private data center.
2. **Analytics Accelerator**, which unlocks rapid analytics across unified business data in Postgres, powering 30x faster query performance and improving cost efficiency.
3. **AI Accelerator**, the fastest way to test and launch enterprise generative AI (GenAI) applications like chatbots and recommendation engines, so you can build cutting-edge GenAI functionality with just 5 lines of familiar SQL code (rather than 130+ using standard approaches).

To continue supporting our customers’ requirements as they evolve with their growing transactional workloads, we’ve also released enhancements to our **transactional database server software** and tooling, including **EDB Postgres 17** to meet the demands of modern workloads and the **EDB Software Bill of Materials**, offering visibility into your secure open source supply chain. 

Today, these **transactional database enhancements** are **generally available**, along with the **AI Accelerator**. The **Hybrid Control Plane** and **Analytics Accelerator** are **now available for preview** through a [concierge demo experience](https://www.enterprisedb.com/engage).

## What’s in preview? Unlock cloud scale and rapid analytics in hybrid environments

### **Hybrid Control Plane**

_Automation, single pane of glass management, and observability across hybrid data estates._

Modern enterprises manage data across multiple clouds and on-premises deployments. The undifferentiated heavy lifting of database administration often distracts operators and engineers from more value-oriented work, like improving app scalability and accelerating time to market for data initiatives. While public cloud Database-as-a-Service (DBaaS) offerings provide automation of administrative tasks, they require tradeoffs on control, data sovereignty, and deployment flexibility.

The **Hybrid Control Plane** is a **centralized management and automation solution** for the EDB Postgres AI Software Deployment, providing cloud automation and agility in a self-hosted environment. It boosts productivity up to 30% by automating time-consuming and expensive administrative functions like backups, provisioning, and point-in-time recovery – enabling a [hybrid DBaaS experience](https://www.enterprisedb.com/use-case/hybrid-dbaas). Monitor, observe, and respond to issues in real time with visibility into 200+ metrics, keeping databases secure and enabling up to 99.999% availability. Plus, with built-in query diagnostics, you can identify problems and bottlenecks up to 5x faster and accelerate application performance up to 8x.

See a demo of the Hybrid Control Plane in action!

<div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/1018060043?badge=0&amp;autopause=0&amp;player_id=0&amp;app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture; clipboard-write" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="Hybrid DBaaS: Cluster Upgrade, Workload Testing, and Failover Simulation with EDB"></iframe></div><script src="https://player.vimeo.com/api/player.js"></script>


### **Analytics Accelerator**

_Unify transactional and analytical workloads in Postgres with lower cost, faster performance, and simpler operations._ 

Scaling analytics workloads is crucial for modern enterprises that deal with high volumes of data and demand rapid insights. Running analytics queries directly on transactional data requires teams to spend significant time on data management, and it can degrade operational performance and slow time to insight.

EDB’s [**Analytics Accelerator**](https://www.enterprisedb.com/products/analytics-accelerator) leverages lakehouse ecosystem integration and a Vectorized Query Engine so you can use SQL to query columnar data in external object storage. This allows you to run complex analytical queries across core business data with no lag on existing transactional workloads — 30x faster than standard Postgres.

It also supports Tiered Tables functionality, ensuring optimal performance by automatically offloading cold data to columnar tables in object storage, reducing overall storage costs with 18x more cost efficiency and simplifying the process of managing analytics over multiple data tiers. 

Watch a demo to see how to add an analytics node, sync data, and integrate with Databricks, improving insights without sacrificing performance.

<div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/1018059957?badge=0&amp;autopause=0&amp;player_id=0&amp;app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture; clipboard-write" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="Improve Analytics Insight Without Sacrificing Performance"></iframe></div><script src="https://player.vimeo.com/api/player.js"></script>


### **EDB Data Migration Service (DMS) and Data Sync**

_Accelerate seamless migrations to break free from legacy constraints and innovate faster._

Today, organizations want to break free from legacy systems to tackle next-gen application development, which requires diverse data models and open standards that integrate with modern data stacks.

[Modernizing](https://www.enterprisedb.com/use-case/modernize-legacy-applications) from legacy systems to EDB Postgres AI unlocks rapid innovation and growth for enterprises, enabling seamless migrations to enterprise-grade PostgreSQL. The [**EDB Data Migration Service (DMS)**](https://www.enterprisedb.com/docs/edb-postgres-ai/migration-etl/data-migration-service/) **and Data Sync** provide a secure, fault-tolerant way to migrate Oracle and Postgres data from on-premises and cloud environments into EDB Postgres AI, enabling organizations with strict security compliance and data privacy needs to use EDB’s migration capabilities in their own environments. **EDB's Oracle Estate Migration Assessments** also make it easy to gauge the complexity and level of effort required to migrate your Oracle databases to Postgres.

Learn more about [Oracle compatibility](https://www.enterprisedb.com/products/edb-postgres-advanced-server) enhancements and how EDB Postgres AI unlocks rapid innovation and growth for enterprises undergoing modernization of their legacy data infrastructure.

## Generally Available Today – Enhanced AI and Transactional Workloads

### **AI Accelerator** 

_The fastest way to test and launch enterprise generative AI (GenAI) applications_

Postgres users can already use the open source pgvector extension for foundational vector data support. This is powerful on its own but still requires developers to do a lot of manual work to create data pipelines, select embedding models, and keep embeddings up to date to avoid data staleness. 
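
To make that manual-work point concrete, here is a minimal sketch of storing embeddings and running a nearest-neighbor query yourself with standard pgvector syntax (not the EDB Pipelines API; the table and vectors are illustrative):

```sql
-- Plain pgvector: you create the schema, compute embeddings elsewhere,
-- insert them, and keep them fresh yourself
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE documents (
    id        bigserial PRIMARY KEY,
    content   text,
    embedding vector(3)   -- production apps typically use hundreds of dimensions
);

INSERT INTO documents (content, embedding) VALUES
    ('first doc',  '[0.1, 0.2, 0.3]'),
    ('second doc', '[0.9, 0.8, 0.7]');

-- <-> is pgvector's L2-distance operator: closest rows sort first
SELECT content
FROM documents
ORDER BY embedding <-> '[0.1, 0.2, 0.25]'
LIMIT 5;
```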

The [**AI Accelerator**](https://www.enterprisedb.com/products/ai-accelerator) provides the fastest way to test and launch multi-modal enterprise GenAI applications with the powerful EDB Pipelines extension, which is preloaded with pgvector and advanced AI workflow functionality like managed pipelines and automatic embedding generation. This enables customers to get GenAI apps to market faster with out-of-the-box vector data capabilities, less custom code, lower maintenance, and fewer application integration efforts. Now, developers can build complex GenAI functionality using SQL commands in the familiar Postgres environment—with just 5 lines of code instead of 130+.

You can also transform your Postgres database into a powerful GenAI semantic search engine that’s [4.22x faster](https://www.confident-ai.com/blog/why-we-replaced-pinecone-with-pgvector) than other purpose-built vector databases. Want to see this in real time? Check out this demo of a GenAI application that provides quick, accurate recommendations based on text or image searches. The AI Accelerator is generally available today – [get started here](https://enterprisedb.com/docs/purl/aidb/gettingstarted).

<div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/1018059901?badge=0&amp;autopause=0&amp;player_id=0&amp;app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture; clipboard-write" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="Bring AI Models to Your Postgres Data"></iframe></div><script src="https://player.vimeo.com/api/player.js"></script>


### **EDB Postgres 17**

_Use PostgreSQL to meet the demands of modern workloads_

The recent [PostgreSQL 17 release](https://www.enterprisedb.com/news/edb-contributions-postgresqlr-17-help-enterprises-unlock-greater-performance-complex-workloads) equipped users with backup and recovery improvements, JSON enhancements, and performance gains to support modern database operations. EDB was a key contributor to these Postgres enhancements, and we’re excited to make these community features generally available across the EDB Postgres AI transactional databases, tools, and extensions, including [EDB Postgres Advanced Server](https://www.enterprisedb.com/products/edb-postgres-advanced-server) (EPAS) and [EDB Postgres Extended](https://www.enterprisedb.com/products/edb-postgres-extended) (PGE).
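
As one concrete taste of those JSON enhancements, PostgreSQL 17 adds the SQL/JSON `JSON_TABLE` function, which maps JSON documents onto relational rows (the data here is illustrative):

```sql
-- PostgreSQL 17: JSON_TABLE turns a JSON array into rows and typed columns
SELECT t.name, t.score
FROM JSON_TABLE(
    '[{"name": "alice", "score": 10}, {"name": "bob", "score": 7}]',
    '$[*]'
    COLUMNS (
        name  text PATH '$.name',
        score int  PATH '$.score'
    )
) AS t;
```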

These releases are generally available and ready for download. Visit EDB docs for detailed [EPAS](https://www.enterprisedb.com/docs/epas/latest/) and [PGE](https://www.enterprisedb.com/docs/pge/latest/) Release Notes or [check out this blog](https://www.enterprisedb.com/blog/edb-postgresr-17-transactional-database-highlights) for a recap of what’s new in EDB Postgres 17. 

### **EDB Software Bill of Materials**

_Build with open source confidently and ensure security and Postgres compliance readiness_ 

Enterprises today must ensure that customer data is protected and access to databases is controlled. While open source software (OSS) deployments can provide cost benefits, allow flexibility, and enable rapid innovation, they also introduce a challenge in identifying and mitigating potential security vulnerabilities. Today, the **EDB Software Bill of Materials (SBOM)** is available for **EDB Postgres Advanced Server** and **EDB Postgres Distributed** software packages through the [**EDB Trust Center**](https://trust.enterprisedb.com/?itemName=continuous_monitoring&source=click). It offers visibility into your open source supply chain with a detailed inventory of the components and dependencies that comprise the software, including up-to-date license reporting.

Because the SBOM makes it easy to identify potential security vulnerabilities, you can ensure [secure open source software](https://www.enterprisedb.com/use-case/secure-oss), mitigate risk, and reduce your attack surface as you invest in open source. [Learn more about securing your open source software](https://www.enterprisedb.com/blog/edb-announces-secure-open-software-solution-edb-postgres-air-enterprise-and-government).

### **That’s a wrap!** 

To learn more about EDB Postgres AI Software Deployment, read more and register for the preview experience [here](https://www.enterprisedb.com/preview). You can also zoom in even further on AI and Analytics launches with our [dedicated post](https://www.enterprisedb.com/blog/solving-enterprise-generative-ai-and-analytics-challenges-zooming-our-q4-2024-release) about them.
