Hopefully I can muster up the energy to start this new project:
Put up lots of thermometers and hygrometers in the apartment and have them report their readings wirelessly to a database.
I suspect that I’ll have to “build” these myself, because ready-to-use kits most likely require some sort of cloud service. Dunno, haven’t checked yet.
@alexonit@twtxt.alessandrocutolo.it My problem is I don’t see a world where we don’t employ some form of cryptography to use as keys for threads in databases and other such things, honestly. I’m not going to use `url#timestamp` as keys.
I corrupted my SQLite test database with `sed -i s/… $(find …)`. Clearly, I found too many files. That’s the signal to go to bed.
@kat@yarn.girlonthemoon.xyz Pretty sure I have many more mentions in the database than the one and only one I see, hmmm 🤔 – I’ll have a look at the code when I can and the SQL query it’s using
Chances are the database they bought wasn’t cheap at all and was sold by some scam company that probably ripped them off for six figures or more for a database that’s full of rubbish. 🤣
Now that’s interesting. Some of these bots start crawling at URLs like this:
That is obviously completely wrong. But I can explain it. Some years ago, I screwed up my nginx rewrite rules, and that’s how these broken URLs came to be.
It all redirects to `/git` now, which is why that endpoint sees so much traffic lately.
But what does that mean? Why do they start there? I can only speculate that this company bought an old database of web links and they use that to start crawling. And it was probably a cheap one, because these redirects have been fixed for quite a long time now.
linode’s having a major outage (ongoing as of writing, over 24 hours in) and my friend runs a site i help out with on one of their servers. we didn’t have recent backups so i got really anxious about possible severe data loss considering the situation with linode doesn’t look great (it seems like a really bad incident).
…anyway the server magically came back online and i got backups of the whole application and database, i’m so relieved :')
[$] The second half of the 6.16 merge window
The 6.16 merge window closed on June 8, as expected, containing 12,899 non-merge commits. This is slightly more than the 6.15 merge window, but well in line with expectations. 7,353 of those were merged after the summary of the first half of the merge window was written. More detailed statistics can be found in [the LWN kernel source database](https://lwn … ⌘ Read more
Microservices architecture: 5 design patterns you must know
1. Database per Service. Goal / purpose: achieve loose coupling between microservices; strengthen service independence, scalability, and data encapsulation. Key concepts / how it works: each microservice manages its own private database, which may only be accessed by the service that owns it. This enforces clear boundaries and promotes the single-responsibility principle. Data isolation reduces dependencies between services; technology flexibility means a service can use… ⌘ Read more
@kat@yarn.girlonthemoon.xyz yes, both the newsletter and the podcast, from time to time.
@prologic@twtxt.net I was not expecting much, but since the list of restaurants near company buildings was hard-coded into it, I did expect it to at least copy the menu text from the websites into its database. Ironically, the only restaurant where it got something right is the only one where the website has the text as a transparent PNG that the AI has to convert to text.
I asked ChatGPT what it knows about Twtxt 😂 And surprisingly it’s rather accurate:
Twtxt is a minimalist, decentralized microblogging format introduced by John Downey in 2016. It uses plain text files served over HTTP—no accounts, databases, or APIs.
In 2020, James Mills (@prologic@twtxt.net) launched Yarn.social, an extended, federated implementation with user discovery, threads, mentions, and a full web UI.
Both share the same .twtxt.txt format but differ in complexity and social features.
@bender@twtxt.net Exactly. I suspect it was because of `sqlitebrowser` also accessing the database in parallel to debug the original issue.
So far, I have not found the exact reason why some replies don’t show up. When I do not filter for unread messages and show all, though, I actually see them. So, there’s that.
I just noticed that my unread messages counter was off by quite a bit. It showed 8, but I only saw one unread message. Even after restarting my client, which recalculates the number of unread messages, it remained at eight. Weird. Looking in the database revealed that this is indeed correct.
Apparently, my query to build up the message tree must be incorrect. It somehow misses seven messages. They are all orphaned; maybe that’s a clue. However, generating missing root messages (and thereby including the replies) typically works just fine. Hmm.
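Since the missing messages are all orphans, one way to confirm the theory is to ask the database directly for replies whose parent hash doesn’t exist. A minimal sketch, assuming a hypothetical schema of `messages(hash, root_hash, read)` and the mattn/go-sqlite3 driver – the real `tt` schema surely differs:

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/mattn/go-sqlite3" // SQLite driver
)

func main() {
	db, err := sql.Open("sqlite3", "cache.db") // assumed database file name
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Replies whose root/parent hash is missing from the table are
	// "orphaned" and would silently vanish from a tree built with an
	// INNER JOIN on the parent.
	rows, err := db.Query(`
		SELECT hash
		FROM   messages
		WHERE  root_hash IS NOT NULL
		  AND  root_hash NOT IN (SELECT hash FROM messages)`)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	for rows.Next() {
		var hash string
		if err := rows.Scan(&hash); err != nil {
			log.Fatal(err)
		}
		fmt.Println("orphaned reply:", hash)
	}
	if err := rows.Err(); err != nil {
		log.Fatal(err)
	}
}
```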
@movq@www.uninformativ.de json and database put together sounds terrifying. i must try jenny
jenny really isn’t well equipped to handle edits of my own twts.
For example, in 2021, this change got introduced:
https://www.uninformativ.de/git/jenny/commit/6b5b25a542c2dd46c002ec5a422137275febc5a1.html
This means that jenny will always ignore my own edits unless I also manually edit its internal “json database”. Annoying.
That change was requested by a user who had the habit of deleting twts or moving them to another mailbox or something. I think that person is long gone and I might revert that change. 🤔
@prologic@twtxt.net is it twice in the database, or simply rendered twice? If you manually expunge it, will it affect the yarn?
[$] Supporting untorn buffered writes
At last year’s Linux Storage, Filesystem, Memory-Management, and BPF Summit (LSFMM+BPF), there was a discussion about atomic writes that was accompanied by patches to support the feature in the block layer, and for direct I/O on XFS. That work was merged, but another piece of that discussion concerned adding the feature for buffered I/O, in part because the PostgreSQL database currently has to jump through hoops to ensure that its writes are not “torn” (partial … ⌘ Read more
@xuu@txt.sour.is Wow, that’s a giant graveyard. In my new database I have 16,428 messages as of now. Archive feed support is not yet available, so it’s just the sum of all the 36 main feeds in my `tt` reimplementation that I already followed with the old Python `tt`. Previously, I just had a few feeds for testing purposes in my new config. While transferring, I “dropped” heaps of feeds that appeared to be inactive.
Thanks, @movq@www.uninformativ.de!
My backing SQLite database with indices is 8.7 MiB in size right now.
The `twtxt` cache is 7.6 MiB, it uses Python’s `pickle` module. And next to it there is a 16.0 MiB second database with all the read statuses for the old `tt`. Wow, super inefficient, it shouldn’t contain anything else, it’s a giant, pickled `{"$hash": {"read": True/False}, …}`. What the heck, why is it so big?! O_o
A collection of PostgreSQL patterns that you can use in other databases
https://mccue.dev/pages/3-11-25-life-altering-postgresql-patterns
#postgresql #databases
(Back in `tt`.) Well, it kinda worked. At least appending to the file. But my cache database got screwed up. I do not yet support replies, so the subject and root hash columns have not been set at all, resulting in a message that is just not shown at all. I gotta do something about that next. The good thing is, though, after simply fixing the two columns the message appeared on screen.
wahhh i wanna work towards my dream of offering pay as you can web hosting (static & dynamic) but i don’t know how!!!!! i keep drifting towards hosting panels but i don’t exactly have fresh linux servers for those nor do i like the level of access they require. so i’m like ok i can do the static site part with SFTP chroot jails and a front-end like filebrowser or something…. but then what about the dynamic sites!!!!!!! UGH
granted i doubt i’d get much interest in dynamic sites but i’d like to do this old school where i can offer people isolated mySQL databases or something for some project (i’m thinking PHP based fanlistings), which means i could do it the old school way of… people ask me to run it and i do it for them. but i kind of want to let people have access to be able to do it themselves just short of giving them SSH access which isn’t happening
@andros@twtxt.andros.dev If something fits in a CSV file, it typically doesn’t require a database. I agree with that. Depending on the application, more complicated queries might benefit from a database, though. I don’t know awk very well, but I could imagine that grep, sed and cut reach their CSV processing limits rather quickly when you have to deal with escaped (multiline) fields.
I only very rarely have to deal with CSV files or databases in my day to day life. Maybe, these classic Unix tools offer some tricks I’m not aware of. When I have some more complicated CSV input, I generally reach for Python.
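For what it’s worth, Go’s standard `encoding/csv` handles exactly those awkward cases. A small self-contained example (the input is a toy, not anything from this discussion) showing an escaped quote and an embedded newline parsed correctly:

```go
package main

import (
	"encoding/csv"
	"fmt"
	"log"
	"strings"
)

func main() {
	// A field containing an escaped quote and an embedded newline –
	// exactly the case where a naive grep/sed/cut pipeline falls apart.
	input := "id,comment\n1,\"she said \"\"hi\"\",\nthen left\"\n"

	r := csv.NewReader(strings.NewReader(input))
	records, err := r.ReadAll()
	if err != nil {
		log.Fatal(err)
	}
	for _, rec := range records {
		fmt.Printf("%q\n", rec)
	}
	// Output:
	// ["id" "comment"]
	// ["1" "she said \"hi\",\nthen left"]
}
```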
pls elaborate on a ‘p2p database’, ‘all history’ and ‘Registries’.
My first thought takes me to something like secure-scuttlebutt, where it’s painful to sync data using clients, and too slow compared to downloading a text file.
Also, I’d like twtxt to avoid becoming an ActivityPub. It works well, but it uses too many resources IMO.
https://kingant.net/2025/02/mastodon-the-cost-of-running-my-own-server/
I’m defending being able to self-host your Web client (like you’d do with a WordPress; twtxt is microblogging, at the end), instead of federated instances. So, as a first thought, I’d say Registries have many disadvantages, the first one being that someone has to keep them active and maintained.
What does the #twtxt community think about having a p2p database to store all history? This will be managed by Registries.
@prologic@twtxt.net We often turn to a database when we can use a plain text file, such as a CSV. With sed or awk, you can run simple queries without using a database.
Did I get the context right? 😀
The other day, after a discussion online, we came to the conclusion that using awk+sed+tr could replace much of the development that requires a database. However, using SQLite to have a SQL syntax isn’t a bad idea either. What do you think?
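To illustrate the SQLite half of that thought: a minimal sketch that loads a toy CSV into an in-memory SQLite table and runs a SQL query over it. The table name, columns, and driver (mattn/go-sqlite3) are all illustrative choices:

```go
package main

import (
	"database/sql"
	"encoding/csv"
	"fmt"
	"log"
	"strings"

	_ "github.com/mattn/go-sqlite3" // SQLite driver
)

func main() {
	// Toy CSV standing in for the plain-text "database".
	data := "name,age\nada,36\ngrace,45\nlinus,29\n"

	records, err := csv.NewReader(strings.NewReader(data)).ReadAll()
	if err != nil {
		log.Fatal(err)
	}

	db, err := sql.Open("sqlite3", ":memory:")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if _, err := db.Exec(`CREATE TABLE people (name TEXT, age INTEGER)`); err != nil {
		log.Fatal(err)
	}
	for _, rec := range records[1:] { // skip the header row
		if _, err := db.Exec(`INSERT INTO people VALUES (?, ?)`, rec[0], rec[1]); err != nil {
			log.Fatal(err)
		}
	}

	// Now the "awk query" becomes SQL.
	var avg float64
	if err := db.QueryRow(`SELECT AVG(age) FROM people WHERE age > 30`).Scan(&avg); err != nil {
		log.Fatal(err)
	}
	fmt.Println("average:", avg)
}
```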
I’m continuing my `tt` rewrite in Go and quickly implemented a stack widget for tview. The builtin Pages is similar but way too complicated for my use case. I would have to specify a mandatory name and some additional options for each page. Also, it allows me to randomly jump around between pages using names, but only gives me direct access to the first page, not the last one. Weird. I don’t wanna remember names. All I really need is a classic stack (see the sketch below): you open a new fullscreen dialog and maybe another one on top of that; closing the uppermost brings you back to the previous one, and so on.
The very first dialog I added is viewing the raw message text. Unlike in @arne@uplegger.eu’s TwtxtReader, I’m not able to include the original timestamp, though. I don’t have it in its original form in the database. :-/
Next up is a URL view.
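For the curious, such a stack might look roughly like this on top of `tview.Pages` – a minimal sketch, not `tt`’s actual code, with all names invented:

```go
package main

import (
	"fmt"

	"github.com/rivo/tview"
)

// Stack is a minimal LIFO wrapper around tview.Pages: Push opens a new
// fullscreen page on top, Pop closes the topmost one again. No names to
// remember – pages are keyed by an internal counter.
type Stack struct {
	*tview.Pages
	depth int
}

func NewStack() *Stack { return &Stack{Pages: tview.NewPages()} }

func (s *Stack) Push(p tview.Primitive) {
	s.depth++
	s.AddPage(fmt.Sprintf("stack-%d", s.depth), p, true, true)
}

func (s *Stack) Pop() {
	if s.depth == 0 {
		return
	}
	s.RemovePage(fmt.Sprintf("stack-%d", s.depth))
	s.depth--
}

func main() {
	stack := NewStack()
	stack.Push(tview.NewTextView().SetText("base view"))
	stack.Push(tview.NewTextView().SetText("raw message dialog"))

	// Ctrl-C quits; a real client would wire Pop() to a key binding.
	if err := tview.NewApplication().SetRoot(stack, true).Run(); err != nil {
		panic(err)
	}
}
```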
I think it is not easy to implement; you need a database. Timeline is an elegant solution: read and sort.
FINALLY!! Got #Caddy server up and running and got rid of nginx proxy manager and Mysql database containers 🥳🥳🥳
What is clean architecture? That’s a good question.
Think of it as a pattern for organizing code around good decisions: isolating technologies (you can change the web framework or database without breaking the business logic), easy testing (you only test interfaces and use cases), sharing code between frameworks (entities and use cases), scalability, modularity, and standardized naming. Clean architecture is not perfect; it has a learning curve and adds some abstraction in each technology. You may even find rejection among your colleagues. (A sketch of the idea follows below.)
I have a good article on this topic.
https://programadorwebvalencia.com/implementando-arquitectura-limpia-en-python/
#python
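The linked article is about Python, but the core idea translates to any language. A tiny sketch in Go, with all names invented for illustration: the use case depends only on an interface, so the storage technology can change without touching the business rule:

```go
package main

import "fmt"

// Entity: pure business object, no framework or database imports.
type Invoice struct {
	ID     int
	Amount float64
}

// The use case depends only on this interface; SQLite, Postgres or an
// in-memory fake can be swapped in without touching business logic.
type InvoiceRepository interface {
	FindByID(id int) (Invoice, error)
}

// Use case: the business rule lives here and is trivially testable.
func TotalWithTax(repo InvoiceRepository, id int, rate float64) (float64, error) {
	inv, err := repo.FindByID(id)
	if err != nil {
		return 0, err
	}
	return inv.Amount * (1 + rate), nil
}

// An in-memory implementation, e.g. for tests.
type memRepo map[int]Invoice

func (m memRepo) FindByID(id int) (Invoice, error) {
	inv, ok := m[id]
	if !ok {
		return Invoice{}, fmt.Errorf("invoice %d not found", id)
	}
	return inv, nil
}

func main() {
	repo := memRepo{1: {ID: 1, Amount: 100}}
	total, _ := TotalWithTax(repo, 1, 0.21)
	fmt.Println(total) // 121
}
```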
been playing with making fun scripts using charm CLI’s gum library :P
one that gets lyrics from an open lyrics database’s API and accepts input for artist & song names: https://asciinema.org/a/697860
and one that uses a user-provided last.fm API key to pull what’s currently playing or what last played on your account :) https://asciinema.org/a/697874
Understand Golang database connection management in 5 minutes
This article explains how to optimize database connections in Golang, improving application throughput by managing connections effectively. Original: Optimizing Database Connections in Go: Improving Throughput by Managing Open Connections Efficiently[1]. Go’s database/sql package provides automatic database connection pooling, helping developers effectively manage conn… ⌘ Read more
iPad Mini 7 Benchmarks Confirm 8GB RAM, 5-Core GPU’s Slower Speeds
The seventh-generation iPad mini has now appeared on Geekbench, confirming that it has 8GB of memory and revealing how the 5-core GPU version of the A17 Pro chip performs.
The new iPad mini, identified as [iPad 16,2 on the Geekbench database](https://browser.geekbench.com … ⌘ Read more
@prologic@twtxt.net that “little database that could” is simply amazing, isn’t it? I run Conduwuit (nevermind, this one is RocksDB), and GoToSocial using it as a backend, no issues. And, of course, sqlite is the database of choice for a lot of things under iOS.
I demand full 9-digit nanosecond timestamps and the full TZ identifier as documented in the tz 2024b database! I need to know if there was a change in daylight savings as per the locality in question as of the provided date.
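(Tongue-in-cheek demands aside, Go can deliver this. A small sketch: `.000000000` in the layout forces all nine nanosecond digits, and the IANA zone name – `Australia/Brisbane` here is just an example – carries the DST rules that a bare offset cannot.)

```go
package main

import (
	"fmt"
	"log"
	"time"
)

func main() {
	// IANA zone names come from the system tz database (2024b or whatever
	// is installed); the zone, not a fixed offset, encodes the DST rules.
	loc, err := time.LoadLocation("Australia/Brisbane")
	if err != nil {
		log.Fatal(err)
	}

	t := time.Now().In(loc)
	// ".000000000" forces all 9 nanosecond digits (".999999999" would
	// trim trailing zeros); the zone name is printed alongside.
	fmt.Println(t.Format("2006-01-02T15:04:05.000000000-07:00"), loc)
}
```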
BTW this code doesn’t incorporate existing twts into jenny’s database. It’s best used starting from scratch. I’ve been testing it using a custom XDG_CACHE_HOME and XDG_CONFIG_HOME to avoid messing with my “real” jenny data.
I wrote some code to try out non-hash reply subjects formatted as (replyto …), while keeping the ability to use the existing hash style.
I don’t think we need to decide all at once. If clients add support for a new method then people can use it if they like. The downside of course is that this costs developer time, so I decided to invest a few hours of my own time into a proof of concept.
With apologies to @movq@www.uninformativ.de for corrupting jenny’s beautiful code. I don’t write this expecting you to incorporate the patch, because it does complicate things and might not be a direction you want to go in. But if you like any part of this approach feel free to use bits of it; I release the patch under jenny’s current LICENCE.
Supporting both kinds of reply in jenny was complicated because each email can only have one Message-Id, and because it’s possible the target twt will not be seen until after the twt referencing it. The following patch uses an sqlite database to keep track of known (url, timestamp) pairs, as well as a separate table of (url, timestamp) pairs that haven’t been seen yet but are wanted. When one of those “wanted” twts is finally seen, the mail file gets rewritten to include the appropriate In-Reply-To header. (A sketch of this bookkeeping follows after this post.)
Patch based on jenny commit 73a5ea81.
https://www.falsifian.org/a/oDtr/patch0.txt
Not implemented:
- Composing twts using the (replyto …) format.
- Probably other important things I’m forgetting.
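To make the two-table idea concrete, here is a rough sketch of the bookkeeping – not the actual patch, which is Python and linked above; all table and column names here are invented:

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/mattn/go-sqlite3" // SQLite driver
)

func main() {
	db, err := sql.Open("sqlite3", "jenny-threads.db") // made-up file name
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// known: (url, timestamp) pairs already seen, with the Message-Id of
	// the mail file written for them.
	// wanted: pairs referenced by a (replyto …) subject before the target
	// twt itself has been fetched.
	_, err = db.Exec(`
		CREATE TABLE IF NOT EXISTS known (
			url       TEXT NOT NULL,
			timestamp TEXT NOT NULL,
			msgid     TEXT NOT NULL,
			PRIMARY KEY (url, timestamp)
		);
		CREATE TABLE IF NOT EXISTS wanted (
			url       TEXT NOT NULL,
			timestamp TEXT NOT NULL,
			mailfile  TEXT NOT NULL,
			PRIMARY KEY (url, timestamp, mailfile)
		);`)
	if err != nil {
		log.Fatal(err)
	}

	// When a twt is finally seen: look up any mail files waiting for it,
	// so they can be rewritten with an In-Reply-To header.
	rows, err := db.Query(
		`SELECT mailfile FROM wanted WHERE url = ? AND timestamp = ?`,
		"https://example.org/twtxt.txt", "2024-09-08T12:00:00Z")
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()
	for rows.Next() {
		var mailfile string
		if err := rows.Scan(&mailfile); err != nil {
			log.Fatal(err)
		}
		log.Println("would rewrite", mailfile, "with In-Reply-To")
	}
}
```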
Can I get someone like maybe @xuu@txt.sour.is or @abucci@anthony.buc.ci or even @eldersnake@we.loveprivacy.club – if you have some spare time – to test this `yarnd` PR that upgrades the Bitcask dependency for its internal database to v2? 🙏
VERY IMPORTANT: If you do, Please Please Please backup your `yarn.db` database first! 😅 Heaven knows I don’t want to be responsible for fucking up a production database here or there 🤣
Hmmmm, I somehow ran into an encoding problem where my inserted data ends up mangled in the database. But both SQLite and Go use UTF-8. What’s happening here? :-?
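One classic culprit when both ends speak UTF-8 is a Latin-1 round-trip somewhere in between (mojibake). A tiny sketch of what that failure mode looks like, handy for comparing against the mangled rows:

```go
package main

import (
	"fmt"
	"unicode/utf8"
)

func main() {
	original := "Tschüss" // what was inserted
	// Classic mojibake: the UTF-8 bytes get reinterpreted as Latin-1
	// somewhere in the pipeline and re-encoded as UTF-8.
	mangled := ""
	for _, b := range []byte(original) {
		mangled += string(rune(b)) // each byte becomes its own code point
	}

	fmt.Println(mangled)                   // TschÃ¼ss
	fmt.Println(utf8.ValidString(mangled)) // true – valid UTF-8, wrong text!
	fmt.Printf("% x\n", original)          // 54 73 63 68 c3 bc 73 73
	fmt.Printf("% x\n", mangled)           // 54 73 63 68 c3 83 c2 bc 73 73
}
```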
@bender@twtxt.net Yes, they do 🤣 Implicitly, or threading would never work at all 😅 Nor lookups 🤣 They are used as keys. Think of them like a primary key in a database or index. I totally get where you’re coming from, but there are trade-offs with using Message/Thread Ids as opposed to Content Addressing (like we do) and I believe we would just encounter other problems by doing so.
My money is on extending the Twt Subject extension to support more (optional) advanced “subjects”; i.e: indicating you edited a Twt you already published in your feed as @falsifian@www.falsifian.org indicated 👌
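For readers new to the thread, this is the shape of content addressing: the key is derived from the twt itself, so every client computes the same key independently, with no central ID issuer. A deliberately simplified sketch – the real Twt Hash spec uses a different construction:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// key derives a thread key from the twt's own content. (Illustration only;
// the actual Twt Hash spec differs in hash choice, encoding, and length.)
func key(feedURL, timestamp, content string) string {
	sum := sha256.Sum256([]byte(feedURL + "\n" + timestamp + "\n" + content))
	return fmt.Sprintf("%x", sum[:4]) // short prefix as the thread key
}

func main() {
	k := key("https://example.org/twtxt.txt", "2024-09-08T12:00:00Z", "Hello!")
	fmt.Println(k)
	// Editing the content changes the key – which is exactly the
	// edit-breaks-threads trade-off discussed in this thread.
	k2 := key("https://example.org/twtxt.txt", "2024-09-08T12:00:00Z", "Hello, edited!")
	fmt.Println(k2)
}
```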
Then we have a secondary (but much rarer) problem of the “identity” of a feed in the first place. Using the URL you fetch the feed from, as @lyse@lyse.isobeef.org’s client `tt` seems to do, or using the `# url =` metadata field as every other client does (according to the spec), is problematic when you decide to change where you host your feed. In fact the spec says:
Users are advised to not change the first one of their urls. If they move their feed to a new URL, they should add this new URL as a new url field.
See Choosing the Feed URL – this is one of our longest debates and challenges, and I think (I suspect along with @xuu@txt.sour.is) that the right way to solve this is to use public/private key(s), where you actually have a public key fingerprint as your feed’s unique identity that never changes.
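A sketch of what that could look like; the key scheme, fingerprint length, and encoding here are all assumptions, not anything from the spec:

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"crypto/sha256"
	"encoding/base32"
	"fmt"
	"log"
)

func main() {
	// One keypair per feed; the fingerprint of the public key becomes the
	// feed's stable identity, independent of where the file is hosted.
	pub, _, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		log.Fatal(err)
	}

	sum := sha256.Sum256(pub)
	fp := base32.StdEncoding.WithPadding(base32.NoPadding).EncodeToString(sum[:8])
	fmt.Println("feed identity:", fp) // survives any URL move
}
```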
A new uncensored search engine has been launched by enthusiasts; I don’t know if you’ve heard about it or not: https://cdg.iounews.com The search engine uses its own database for searching. It even works for me on Windows XP.
Correct, @bender@twtxt.net. Since the very beginning, my twtxt flow is very flawed. But it turns out to be an advantage for this sort of problem. :-) I still use the official (but patched) `twtxt` client by buckket to actually fetch and fill the cache. I think one of the patches played around with the error reporting. This way, any problems with fetching or parsing feeds show up immediately. Once I think I’ve seen enough errors, I unsubscribe.
`tt` is just a viewer into the cache. The read statuses are stored in a separate database file.
It also happened a few times, that I thought some feed was permanently dead and removed it from my list. But then, others mentioned it, so I resubscribed.
How do you configure a high-performance sql.DB in Go?
Configuring a high-performance sql.DB is an important part of developing Go applications, especially when you need to handle a large volume of database queries. Here are some best practices and configuration suggestions. 1. Connection pool configuration: Go’s database/sql package provides connection pooling; you can optimize the pool by setting the maximum number of idle connections, the maximum number of open connections, and the maximum connection lifetime. db, err := sql.Open(“mysql”, ”user:password@tcp(127. ⌘ Read more
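The three knobs the article refers to map to three methods on *sql.DB. A minimal sketch – the DSN and the values are placeholders to tune against your own workload:

```go
package main

import (
	"database/sql"
	"log"
	"time"

	_ "github.com/go-sql-driver/mysql" // MySQL driver
)

func main() {
	db, err := sql.Open("mysql", "user:password@tcp(127.0.0.1:3306)/app")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// The three pool knobs the article refers to. The values here are
	// placeholders – tune them against your workload and the database
	// server's own connection limit.
	db.SetMaxOpenConns(25)                 // cap concurrent connections
	db.SetMaxIdleConns(25)                 // keep warm connections around
	db.SetConnMaxLifetime(5 * time.Minute) // recycle before server timeouts

	if err := db.Ping(); err != nil {
		log.Fatal(err)
	}
}
```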
Authy Users Urged to Stay Alert After Hack Exposes 33 Million Phone Numbers
Twilio has updated its Authy two-factor authentication (2FA) service after a hacker claimed to have retrieved 33 million phone numbers from its user database.
TechCrunch reports that the hacker(s) known as ShinyHunters took to a well-kn … ⌘ Read more
Haha, yeah sorry about that, I wasn’t even trying to nuke the database either but it worked out that way 😩
@prologic@twtxt.net Righteo, so rookie error – I obviously had some untracked, rather important files for starting my pod and I ran a `make clean`. Why I originally had them in the git directory is anyone’s guess. Anyway, it blew away those files including the database, so that’s that. So your good self and @bender@twtxt.net etc. – apologies, but your profiles got nuked as well (as did my own, but easily recreated).
Another thing I noticed, which was the reason I ran `make clean` in the first place: I noticed my pod was being built with Go 1.22.4. Could this be a problem, @prologic? `preflight.sh` actually errors out about it…
Gorm source code analysis
Let’s first walk through Gorm’s core main flow with a diagram. gorm main flow: 1. Initializing the DB connection: the connection is initialized via database/sql. What we usually call a database driver is really just each database’s own way of parsing the DSN; underneath, the database connection is always established over TCP. type Connector interface { Connect(context.Context) (Conn, err ⌘ Read more
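For context, opening a connection through Gorm v2 looks like this; the DSN is a placeholder, and the last lines show that Gorm wraps the standard *sql.DB the article describes:

```go
package main

import (
	"log"

	"gorm.io/driver/mysql"
	"gorm.io/gorm"
)

func main() {
	// The driver's job, as the article says, is mostly DSN parsing – the
	// underlying transport is a plain TCP connection via database/sql.
	dsn := "user:password@tcp(127.0.0.1:3306)/app?parseTime=true"
	db, err := gorm.Open(mysql.Open(dsn), &gorm.Config{})
	if err != nil {
		log.Fatal(err)
	}

	// Gorm wraps the standard *sql.DB, so the usual pool knobs still apply.
	sqlDB, err := db.DB()
	if err != nil {
		log.Fatal(err)
	}
	sqlDB.SetMaxOpenConns(25)
}
```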
New Beats Pill Appears in FCC Database Ahead of Launch
Multiple celebrities have been spotted with a new version of the Beats Pill speaker, and today the device showed up in an FCC database, suggesting that we’re getting closer to a potential launch.
FCC filings typically happen just weeks ahead of when a product launches, so we could see this new Beats Pill speaker sometime in June. Apple plans to launch [Solo Buds … ⌘ Read more
sqlx: a powerful database access library
sqlx[1] is a library that extends the standard library’s database/sql, adding functionality that makes SQL more convenient to use from Go. sqlx aims to keep the simplicity of database/sql while offering more features. It provides a set of extensions to Go’s standard database/sql library, with counterparts of sql.Conn, sql.DB, sql.TX, sql.Stmt, sql.Rows, sql.Row ⌘ Read more
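A minimal example of the struct scanning that sqlx adds on top of database/sql, using SQLite purely to keep the demo self-contained:

```go
package main

import (
	"fmt"
	"log"

	"github.com/jmoiron/sqlx"
	_ "github.com/mattn/go-sqlite3" // SQLite driver
)

// Struct scanning is the main convenience sqlx adds over database/sql.
type Person struct {
	Name string `db:"name"`
	Age  int    `db:"age"`
}

func main() {
	db, err := sqlx.Connect("sqlite3", ":memory:")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	db.MustExec(`CREATE TABLE people (name TEXT, age INTEGER)`)
	db.MustExec(`INSERT INTO people VALUES ('ada', 36), ('grace', 45)`)

	var people []Person
	if err := db.Select(&people, `SELECT name, age FROM people ORDER BY age`); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%+v\n", people)
}
```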
Demystifying Go config files: a hands-on guide to reading and writing INI files
1. Introduction to INI files: an INI (Initialization) file is a simple text file format commonly used for configuration. It consists of multiple sections, each containing a number of key-value pairs. Key-value pairs take the form key=value, and sections the form [section]. A simple example: // example INI file [database] host = localhost port = 3306 username = user password = sec ⌘ Read more
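A short sketch of reading that example with the widely used gopkg.in/ini.v1 package; the article’s truncated sample password is completed with a placeholder:

```go
package main

import (
	"fmt"
	"log"

	"gopkg.in/ini.v1"
)

func main() {
	// Parse the article's example config straight from memory;
	// ini.Load also accepts a file name instead of a []byte.
	cfg, err := ini.Load([]byte(`
[database]
host = localhost
port = 3306
username = user
password = secret
`))
	if err != nil {
		log.Fatal(err)
	}

	sec := cfg.Section("database")
	fmt.Println(sec.Key("host").String()) // localhost
	port, _ := sec.Key("port").Int()      // typed accessors included
	fmt.Println(port)                     // 3306
}
```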
Thinking of building a simple “Things our kids say” database form, using Node, Express and SQLite3. Going beyond simple text files.
From my small experience in writing an event database, I am inclined to agree with this.
If you’re looking for a cool p2p database system have a look at www.earthstar-project.org
@abucci@anthony.buc.ci Where did I hate on SQL databases? 🤔
@lyse@lyse.isobeef.org flawed is the right word, not harsh at all. Good reading, and thanks for supporting the possibility of convincing @prologic@twtxt.net to switch to a database! :-D :-P
@eldersnake@we.loveprivacy.club Several reasons:
- It’s another language to learn (SQL)
- It adds another dependency to your system
- It’s another failure mode (database blows up, schema changes, indexes, etc.)
- It increases security problems (now you have to worry about being SQL-safe)
And most of all, in my experience, it doesn’t actually solve any problems that a good key/value store with good indexes and good data structures can’t. I’m just no longer a fan. I used to use MySQL, SQLite, etc. back in the day; these days, nope, I wouldn’t even go anywhere near a database (for my own projects) if I can help it – it’s just another thing that can fail, another operational overhead.
Why and how GitHub encrypts sensitive database columns using ActiveRecord::Encryption
You may know that GitHub encrypts your source code at rest, but you may not have known that we encrypt sensitive database columns as well. Read about our column encryption strategy and our decision to adopt the Rails column encryption standard. ⌘ Read more
Git’s database internals V: scalability
This fifth and final part of our blog series exploring Git’s internals shows several strategies for scaling your Git repositories that match related database sharding techniques. ⌘ Read more
Git’s database internals IV: distributed synchronization
We’re examining Git’s internals to help make your engineering system more efficient. This post views Git as a distributed database and looks into its synchronization techniques, specifically ‘git fetch’ and ‘git push’. ⌘ Read more
Git’s Database Internals III: File History Queries
Git’s file history queries use specialized algorithms that are tailored to common developer behavior. Level up your history spelunking skills by learning how different history modes behave and which ones to use when you need them. ⌘ Read more
Git’s database internals II: commit history queries
This post explores Git commit history as a database where ‘git log’ is the query language. Learn about Git’s custom query index – the commit-graph file – and how to make sure it’s enabled in your repositories. ⌘ Read more
Git’s database internals I: packed object store
This blog series will examine Git’s internals to help make your engineering system more efficient. Part I discusses how Git stores its data in packfiles using custom compression techniques. ⌘ Read more
Introducing Trilogy: a new database adapter for Ruby on Rails
We’ve open sourced Trilogy, the database adapter we use to connect Ruby on Rails to MySQL-compatible database servers. ⌘ Read more
Hi, I am playing with making an event sourcing database. It’s super alpha, but I thought I would share since others are talking about databases and such.
It’s super basic, using tidwall/wal as the disk backing. The first use case I am playing with is an implementation of msgbus: I can post events to it and read them back in reverse order (see the sketch after this post).
I plan to expand it to handle other event sourcing type things like aggregates and projections.
Find it here: sour-is/ev
@prologic@twtxt.net @movq@www.uninformativ.de @lyse@lyse.isobeef.org
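A minimal sketch of the underlying primitive – not of sour-is/ev itself, whose internals may differ: tidwall/wal is an append-only log with monotonically increasing indexes, and reading from LastIndex down to FirstIndex gives the reverse-order replay mentioned above.

```go
package main

import (
	"fmt"
	"log"

	"github.com/tidwall/wal"
)

func main() {
	// tidwall/wal is an append-only log; indexes start at 1 and must be
	// written in order. Run against a fresh ./events directory.
	l, err := wal.Open("events", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer l.Close()

	for i, msg := range []string{"created", "updated", "archived"} {
		if err := l.Write(uint64(i+1), []byte(msg)); err != nil {
			log.Fatal(err)
		}
	}

	// Replay newest-first, like msgbus reading events in reverse order.
	first, _ := l.FirstIndex()
	last, _ := l.LastIndex()
	for i := last; i >= first; i-- {
		data, err := l.Read(i)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(i, string(data))
	}
}
```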
GitHub Advisory Database now supports Erlang and Elixir packages!
We’re excited to announce that the GitHub Advisory Database now includes curated security advisories on Erlang, Elixir, and more. ⌘ Read more
GitHub now publishes malware advisories in the GitHub Advisory Database
To combat the prevalence of malware in the open source ecosystem, GitHub now publishes malware occurrences in the GitHub Advisory Database. These advisories power Dependabot alerts and remain forever free and usable by the community. ⌘ Read more
An update on recent service disruptions
Over the past few weeks, we have experienced multiple incidents due to the health of our database. We wanted to share what we know about these incidents while our team continues to address them. ⌘ Read more
** 2022-02-24 feature/6.0 Android test plan **
Overview: Will test the upgrade path from a known state to the new version to ensure that settings and app state are maintained during the upgrade process.
v6.0 of the libro.fm Android app introduces an entirely new local database. This testing is focused on ensuring that local data remains intact between versions.
Notes: This evening I was mostly focused on setting up a successful build of feature/6.0 on my test device or the emulator. So far, no dice. My next … ⌘ Read more
GitHub Advisory Database now open to community contributions
Anyone can now provide additional information to further the community’s understanding and awareness of security advisories. ⌘ Read more
Code scanning and Ruby: turning source code into a queryable database
A deep dive into how GitHub adds support for new languages to CodeQL. ⌘ Read more
Thinking beyond SQL injection: OWASP tips for secure database access
When it comes to secure database access, there’s more to consider than SQL injections. OWASP Top 10 Proactive Control C3 offers guidance. ⌘ Read more
Video: C Programming on System 6 - A New On-Disk Database Format
It’s a new year and my computer is still old. ⌘ Read more
The complexity is a feature. It means standards can be replaced with products that let providers get their cut. It means putting data into the slowest, most expensive database in cost and environmental impact.
GitHub Advisory Database now powers npm audit
Today, we’re adding a proxy on top of the GitHub Advisory Database that speaks the `npm audit` protocol. This means that every version of the npm CLI that supports security audits is now talking directly to the GitHub Advisory Database. ⌘ Read more
Partitioning GitHub’s relational databases to handle scale
In 2019, to meet GitHub’s growth and availability challenges, we set a plan in motion to improve our tooling and ability to partition relational databases. ⌘ Read more
GitHub Advisory Database now supports Rust
We’re excited to announce that the GitHub Advisory Database now includes curated security advisories on the Rust ecosystem! ⌘ Read more
You’ve basically already left, whether you know it or not. Yesterday they nuked their services database. I’d been there ~20 years, but it’s dead. Libera.chat has been lovely.
Think of it like buying a signed print of a photo, instead of the photo itself, but the “signature” is an entry in a database and that’s all you get. Still dumb.
The lospec palette list is a database of palettes for pixel art: [[https://lospec.com/palette-list]] #links #pixelart #color
huh. it seems that dumping + gzipping a SQLite database can sometimes have better compression than gzipping the SQLite database directly. cool. #sqlite
It works better if you start up its database first.
Baserow: Open source online database tool https://gitlab.com/bramw/baserow #airtable alternative ⌘ https://baserow.io/
here is the script I use to convert my twtxt feed into a SQLite database: !twtxt_sqlite
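The linked script is Janet; for readers who don’t run Janet, a rough Go equivalent of the same conversion (file names and schema are arbitrary choices):

```go
package main

import (
	"bufio"
	"database/sql"
	"log"
	"os"
	"strings"

	_ "github.com/mattn/go-sqlite3" // SQLite driver
)

func main() {
	// twtxt is line-oriented: RFC 3339 timestamp, a tab, then the text.
	feed, err := os.Open("twtxt.txt")
	if err != nil {
		log.Fatal(err)
	}
	defer feed.Close()

	db, err := sql.Open("sqlite3", "feed.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS twts (ts TEXT, msg TEXT)`); err != nil {
		log.Fatal(err)
	}

	sc := bufio.NewScanner(feed)
	for sc.Scan() {
		line := sc.Text()
		if line == "" || strings.HasPrefix(line, "#") {
			continue // skip blank lines and metadata comments
		}
		ts, msg, ok := strings.Cut(line, "\t")
		if !ok {
			continue
		}
		if _, err := db.Exec(`INSERT INTO twts VALUES (?, ?)`, ts, msg); err != nil {
			log.Fatal(err)
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}
```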
a unique thing I do with my twtxt feed is convert it to a SQLite database. This, combined with the Janet + SQLite scripting abilities available in SQLite, could provide interesting metrics and insights over time.
in particular, twtxt provides timestamps. weewiki doesn’t really track the passage of time. it only wants to be a key/value database with org markup.
Posted to Entropy Arbitrage: Database Basics https://john.colagioia.net/blog/2020/04/05/database.html #database #intro #education #preparation
I loved coding ToH, I want to write more database-less websites/services
How Does a Database Work? | Let’s Build a Simple Database https://cstack.github.io/db_tutorial/
Database as Filesystem - YouTube https://www.youtube.com/watch?v=wN6IwNriwHc
Setup Syncthing to mirror buku bookmark database
Notion “ The all-in-one workspace for your notes, tasks, wikis, and databases. https://www.notion.so/tools-and-craft/03-ted-nelson
Interesting idea: create a search-by-meaning for functions using the memoization database http://www.vpri.org/pdf/rn2017002_memoization.pdf
GitHub - orbitdb/orbit-db: Peer-to-Peer Databases for the Decentralized Web https://github.com/orbitdb/orbit-db
Is there a term for absurd euphemisms constructed for censoring dialogue for television – like ‘melon farmer’ and ‘this is what happens when you meet a stranger in the alps’? Is there a database of them?
Are there enough shared answers in the jeopardy questions & answers database to make a ‘Ladies, if he X, then he’s not your man, he’s Y’ bot from that corpus? Assume 4 questions per answer.
Bad idea of the day: a database of maps of conceptual spaces that are drawn like maps of physical spaces (ex., xkcd’s map of the internet & Knuppe’s map of the fields of mathematics)
I love it. I have a program that needs to process about half a million records, which will take 3 days. The database that all those records are supposed to go to is acting up after I’ve just done 140K records.
The design and implementation of modern column-oriented database systems | the morning paper https://blog.acolyer.org/2018/09/26/the-design-and-implementation-of-modern-column-oriented-database-systems/