social.tchncs.de is one of the many independent Mastodon servers you can use to participate in the fediverse.

#vectordatabase


Implementing effective "similarity" search in Java with Quarkus beyond simple keyword matching.

- How vector embeddings enable semantic understanding.
- Integrating and querying vector databases within Quarkus.
- Building a movie similarity search feature.
Essential reading for Java developers and architects exploring AI applications.
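The post targets Java and Quarkus, but the underlying idea fits in a few lines of any language. A minimal Python sketch, with made-up 3-dimensional vectors standing in for real embedding-model output, of ranking movies by cosine similarity:

```python
import math

def cosine_similarity(a, b):
    # cos(theta) = (a . b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings; a real app would get these from an embedding model.
movies = {
    "Alien":        [0.9, 0.1, 0.0],   # sci-fi / horror
    "The Martian":  [0.8, 0.0, 0.3],   # sci-fi / survival
    "Notting Hill": [0.0, 0.9, 0.4],   # romance / comedy
}

query = [0.85, 0.05, 0.1]  # imagined embedding of "space horror movie"
ranked = sorted(movies, key=lambda m: cosine_similarity(query, movies[m]),
                reverse=True)
print(ranked[0])  # "Alien" ranks closest to the query vector
```

This is the semantic step keyword matching lacks: nothing in the query string has to appear in the matched title, only the vectors need to point in similar directions.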

quarkus.io/blog/movie-similari

quarkus.io: "Movie similarity search using vector databases" (Quarkus: Supersonic Subatomic Java)

Another blog post from me to close out the week: datastax.com/blog/five-genai-t

This is all about what Astra DB can do for your GenAI app. It's more than just basic vector search: integrations, performance, and even generating vectors for you.

DataStax: "5 GenAI Things You Didn't Know About Astra DB". Learn some unexpected ways Astra DB helps you to build accurate, low-latency, RAG-powered generative AI apps.

If you're familiar with vector databases and you like Perl, you're probably disappointed that there's not much support for them in Perl.

I've written an article (and a GitHub repo) showing how you can use pgvector to turn PostgreSQL into a vector database.
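The article itself uses Perl; as a language-neutral illustration, this Python sketch reproduces what pgvector's `<=>` cosine-distance operator computes (1 minus cosine similarity), with a typical query shape shown in a comment (the table and column names there are hypothetical):

```python
import math

# A typical pgvector similarity query looks like:
#   SELECT title FROM items ORDER BY embedding <=> '[0.1, 0.2, ...]' LIMIT 5;
# where <=> is pgvector's cosine-distance operator, sketched below.
def pgvector_cosine_distance(a, b):
    # <=> returns 1 - cosine_similarity(a, b):
    # same direction -> 0.0, orthogonal -> 1.0, opposite -> 2.0
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1 - dot / (norm_a * norm_b)

print(pgvector_cosine_distance([1.0, 0.0], [0.0, 1.0]))  # 1.0 (orthogonal)
```

Because it is a distance, not a similarity, you `ORDER BY` it ascending: the smallest value is the closest match.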

#AI #GenAI #PostgreSQL #Perl #VectorDatabase

curtispoe.org/articles/using-v

curtispoe.org: "Using Vector Databases with Perl". Vector databases are amazing, but there are few options with Perl. If you use PostgreSQL, now you have one.

Cloud AI is revolutionizing industries, but what about all the settings where you have no or low connectivity? The answer is 𝐋𝐨𝐜𝐚𝐥 𝐀𝐈.

Thanks to rapid advancements in (large) language models, there are more and more powerful 𝐒𝐦𝐚𝐥𝐥 𝐋𝐚𝐧𝐠𝐮𝐚𝐠𝐞 𝐌𝐨𝐝𝐞𝐥𝐬, and with ObjectBox also a lightweight, fast and energy-efficient 𝐥𝐨𝐜𝐚𝐥 𝐯𝐞𝐜𝐭𝐨𝐫 𝐝𝐚𝐭𝐚𝐛𝐚𝐬𝐞.

Read about Local AI - what it is and why we need it here:

objectbox.io/local-ai-what-it-

Finally, the 𝐯𝐞𝐫𝐲 𝐟𝐢𝐫𝐬𝐭 𝐨𝐧-𝐝𝐞𝐯𝐢𝐜𝐞 𝐯𝐞𝐜𝐭𝐨𝐫 𝐝𝐚𝐭𝐚𝐛𝐚𝐬𝐞 𝐟𝐨𝐫 𝐀𝐧𝐝𝐫𝐨𝐢𝐝 is here

Some may know it: Android was our first love ❤️ (we started developing for the Android OS before its initial release, though that wasn't ObjectBox yet), and the Android community is still dear to our hearts, so we're particularly happy to bring this new tech to Java & Android developers today.

Local AI working on Mobile (without Internet, on the actual devices!) opens up a myriad of new use cases in low, no, or intermittent connectivity scenarios as well as any use cases that have QoS requirements, or “just” need - or want - to keep data private.

We can’t wait to see what Android developers will do with it 🎨, even when they are faster coding an app than we are with releasing the tech😅 - let us know

Python developers, did you know you can now do local / on-device RAG on commodity hardware without the need for an Internet connection, and no need to share the data?

The battle-tested on-device database, ObjectBox, has just extended their Python support. It’s a very lightweight and fast vector database alternative you can run on almost any hardware.

Even with millions of documents, ObjectBox finds the nearest neighbours within milliseconds on commodity hardware. #python #coding #local #vectordatabase #vectorsearch
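The millisecond figures above come from ObjectBox's vector index; for contrast, here is the exact brute-force baseline such an index replaces, as a minimal Python sketch over made-up data (ObjectBox's actual API is not shown):

```python
import heapq
import random

def knn_brute_force(query, vectors, k):
    # Exact nearest neighbours by scanning every stored vector: O(n * d).
    # Dedicated vector databases avoid this full scan with approximate indexes.
    def sq_dist(v):
        return sum((a - b) ** 2 for a, b in zip(query, v))
    return heapq.nsmallest(k, range(len(vectors)), key=lambda i: sq_dist(vectors[i]))

random.seed(0)
docs = [[random.random() for _ in range(8)] for _ in range(1000)]
docs[42] = [0.5] * 8                       # plant a known vector
nearest = knn_brute_force([0.5] * 8, docs, k=3)
print(nearest[0])  # 42: the planted vector is its own nearest neighbour
```

At a thousand toy vectors the scan is instant; at millions of real embeddings it is the part you want an index for.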

objectbox.io/python-on-device-

There are so many #vectordatabase companies now (#Chroma, #Qdrant, #Weaviate, #Pinecone, #Zilliz to name a few) that it feels like a big #AI bubble is growing. All you need for a vector database is basically an embedding model and a retrieval method based on a distance metric over the vectors, such as cosine similarity. In Berlin alone there are two companies - mixedbread.ai and jina.ai - that work solely on embedding models. Is this an AI bubble that is going to burst, or something different?
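Taking the "embedding model plus distance metric" claim literally, here is a deliberately naive vector database in a few lines of Python; the character-count "embedding model" is a toy stand-in, not anything a real product ships:

```python
import math
from collections import Counter

def embed(text):
    # Toy "embedding model": L2-normalised character counts. A real system
    # would call a trained model here; only the interface matters.
    counts = Counter(text.lower())
    norm = math.sqrt(sum(c * c for c in counts.values()))
    return {ch: c / norm for ch, c in counts.items()}

def cosine(u, v):
    # Vectors are sparse dicts, so the dot product walks one key set.
    return sum(u[ch] * v.get(ch, 0.0) for ch in u)

class TinyVectorDB:
    """The post's claim, made literal: an embedding model plus
    distance-based retrieval is already a (naive) vector database."""

    def __init__(self):
        self.items = []            # list of (text, vector) pairs

    def add(self, text):
        self.items.append((text, embed(text)))

    def search(self, query, k=1):
        qv = embed(query)
        ranked = sorted(self.items, key=lambda it: cosine(qv, it[1]),
                        reverse=True)
        return [text for text, _ in ranked[:k]]

db = TinyVectorDB()
db.add("vector databases store embeddings")
db.add("perl is a programming language")
print(db.search("embedding storage")[0])
```

What the commercial products compete on is everything this sketch leaves out: approximate indexes, persistence, filtering, scaling, and embedding quality.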

Continued thread

1. Technical people will plug their existing solutions and approaches into AI and get entrenched further in their inefficiencies.

2. Business people will plug in their goals and get a solution with the illusion of modularity and flexibility; they will still be shackled to complete rewrites under the rationalization that an #MVP is throw-away.

So do it properly instead:

Small systems will use specification-to-implementation models (#LLM). In larger systems, specification-to-EM models will be used by the business, whereas EM-to-specification models will be used by technical people. There will be 3 #vectordatabase instances helping orchestrate these different goals.

This is what Event Modeling was doing all along. We were just the mechanical turk before technology caught up. If you're not ready for AI yet, adopt Event Modeling so you're ready when your hand is forced to do so.

AI helped write this post. It cost me, at most, half the time it would have taken otherwise.