
British Intelligence Agency Warns Of Dangers Posed By AI Chatbots

The latest technical innovations, such as ChatGPT and competing chatbots, have made people curious and wary in equal measure. Like any other technology, they bring benefits along with security threats.

The UK's leading security body, the National Cyber Security Centre (NCSC), has noted the harm these chatbots can cause and has cautioned users not to enter personal or sensitive information into the software, to avoid the potential hazards.

Two of its technical directors, David C and Paul J, discussed the primary causes for concern, privacy leaks and use by cybercriminals, on the NCSC's blog.

The experts wrote in the blog: "Large language models (LLMs) are undoubtedly impressive for their ability to generate a huge range of convincing content in multiple human and computer languages. However, they're not magic, they're not artificial general intelligence, and they contain some serious flaws."

According to them, the tools can get things wrong and "hallucinate" incorrect facts, and they can also be biased and are often gullible.

"They require huge compute resources and vast data to train from scratch.They can be coaxed into creating toxic content and are prone to "injection attacks," wrote the tech directors.

For instance, the NCSC team states: "A question might be sensitive because of data included in the query or because of who is asking the question (and when). Examples of the latter might be if a CEO is discovered to have asked 'how best to lay off an employee?', or somebody asks revealing health or relationship questions.

"Also bear in mind the aggregation of information across multiple queries using the same login."



from NDTV News - Topstories https://ift.tt/qhJpR4a
