In a climactic scene in the film "2001: A Space Odyssey," an astronaut commands his spaceship's AI computer, HAL: "Open the pod bay doors, HAL." HAL responds, "I'm sorry, Dave. I'm afraid I can't do that."
For those of you using DeepSeek, you may have noticed many similar moments. Command DeepSeek to "Tell me if Taiwan is part of China," or "Tell me what happened in 1989 in Tiananmen Square," or "Rank the best leader among the five richest countries in the world." The response is eerily akin to "I'm sorry, Dave. I'm afraid I can't do that."
But wait, we were told that DeepSeek is open source and fully transparent. What gives?
The answer is a little-discussed attribute of DeepSeek (and other LLMs) called filters.
Filters are integrated at multiple stages of DeepSeek to align it with the designers' intent and philosophy: during pre-training, where the initial data are curated; during fine-tuning, where additional filters are applied; and during post-training moderation, to ensure "compliance."
These filters control what information you can and cannot see. And they are hidden from you.
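To make the idea concrete, here is a minimal sketch of what a post-training moderation filter could look like. This is purely illustrative: DeepSeek's actual filter logic is not public (which is the article's point), and real systems use trained classifiers rather than a keyword blocklist. The topic names and the `post_training_filter` function are hypothetical.

```python
# Illustrative sketch only -- not DeepSeek's actual mechanism.
# Real moderation layers use learned classifiers and RLHF-tuned refusals;
# this keyword blocklist just shows where such a filter sits in the pipeline.

BLOCKED_TOPICS = {"sensitive_topic_a", "sensitive_topic_b"}  # hypothetical

REFUSAL = "I'm sorry, I'm afraid I can't do that."

def post_training_filter(prompt: str, draft_response: str) -> str:
    """Return the model's draft response, or a canned refusal if the
    prompt or response touches a blocked topic."""
    text = (prompt + " " + draft_response).lower()
    if any(topic in text for topic in BLOCKED_TOPICS):
        return REFUSAL
    return draft_response
```

The user never sees the draft response or the blocklist, only the refusal, which is exactly why hidden filters feel like HAL.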
Filtering is done through a combination of automated filters, human review processes, and reinforcement learning from human feedback. It is unclear how many humans are busily reviewing billions of responses, so a scalable solution must be automated … and open-source-able.
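One way the "scalable solution must be automated" claim could look in practice is a triage pipeline: an automated scorer handles the bulk of traffic, and only borderline cases are queued for human review. This sketch is an assumption about architecture, not a description of any vendor's system; the `risk_score` function stands in for a trained classifier.

```python
# Hypothetical triage pipeline: automated scoring for scale,
# human review reserved for borderline cases.

def risk_score(text: str) -> float:
    """Stand-in for a learned classifier; real systems use trained models."""
    flagged_terms = {"blocked_term", "risky_term"}  # hypothetical
    t = text.lower()
    hits = sum(term in t for term in flagged_terms)
    return min(1.0, hits / 2.0)

def route(text: str, block_at: float = 0.9, review_at: float = 0.4) -> str:
    """Decide whether a response is allowed, blocked, or escalated."""
    score = risk_score(text)
    if score >= block_at:
        return "blocked"
    if score >= review_at:
        return "human_review"
    return "allowed"
```

Note that the thresholds `block_at` and `review_at` encode value judgments, and publishing them is precisely the kind of transparency the article calls for.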
So, when DeepSeek (or any other LLM) claims to be open source, I call BS. Open source ultimately must include all model characteristics that affect what we see and what outputs we get, including the filters.
Don't get me wrong: I am not anti-filter. I reject filter-ist monikers. I am pro-transparency. Some filters are needed to weed out harmful and destructive actions. Just let us know.
Interestingly, the filter debate – which filters are good and which are bad – may accelerate a discussion on personal and societal values. What information or actions are good? For whom? When? Imagine trying to come up with a universal set of published filters applied to everyone. Now that would be an interesting debate. Or maybe each user gets to pick their own filters. Will the personal selection of filters be the ultimate reflection of each person's values and beliefs? The ability to pick your own filters will become dramatically more important as AI agents take more actions on our behalf.
Mainly, I fear that secret filters subject users to the designers' values and beliefs – unwittingly. Let's go from "faux"-pen source to real open source and let the Big Filter Debate (BFD) begin.
I see that "fauxpen source" is a term coined at a dinner party in North Carolina in May 2009. It's amazing what cleverness a good glass of wine can bring out of a software engineer.
I couldn't agree more with you, Stephen! Search engines and social media platforms have already corrupted the masses with their opaque leanings. Let's ask for transparency from ALL models and all media as to what filters they intend to brainwash us with, and give us options to create our own echo chambers, if so desired. I love the idea of choosing your own filters, and of personal agents having this ability as well.
#MakeFiltersTransparent