Fields for each entry: Name, Link, Format, Priority (0-10, where given), Notes.
Information hazards: Why you should care and what you can do
https://www.lesswrong.com/s/r3dKPwpkkMnJPbjZE/p/6ur8vDX6ApAXrRN3t
Forum Post
10
Very useful development of heuristics for determining when a piece of information might warrant risk consideration and what an appropriate response might be. Good intro to info hazards and why they're important.
The Offense-Defense Balance of Scientific Knowledge: Does Publishing AI Research Reduce Misuse?
https://arxiv.org/abs/2001.00463
Paper
10
Extremely relevant, exactly what Nonlinear is thinking about. Super useful.
Bioinfohazards
https://forum.effectivealtruism.org/posts/ixeo9swGQTbYtLhji/bioinfohazards-1
Forum Post
9
Very relevant, tons of great examples (both real and toy).
Needed: AI infohazard policy
https://www.alignmentforum.org/posts/3D3DsX5rMbk3jEZ5h/needed-ai-infohazard-policy
Forum Post
9
Describes the need for an AI info hazard policy but doesn't propose solutions. Some interesting comments.
What are information hazards?
https://forum.effectivealtruism.org/posts/Nc5EjccDTfmcrG93j/what-are-information-hazards
Forum Post
8
Introductory and definitional; easier to read than Bostrom's paper. Covers terminology. A good place to start for an initial sense of what an info hazard is.
Kevin Esvelt: Mitigating catastrophic biorisks
https://forum.effectivealtruism.org/posts/9iPdD5veF78kQmhiv/kevin-esvelt-mitigating-catastrophic-biorisks
Forum Post
8
Only some segments are relevant, but offers great insight from someone who has devoted quite a bit of time to this topic and works on the inside.
Informational hazards and the cost-effectiveness of open discussion of catastrophic risks
https://forum.effectivealtruism.org/posts/KPwgmDyHaceoEFSPm/informational-hazards-and-the-cost-effectiveness-of-open
Forum Post
8
Useful discussion of info hazard strategic considerations. LW version: https://www.lesswrong.com/posts/wSkHFu79xBKaaMmxM/informational-hazards-and-the-cost-effectiveness-of-open
Thoughts on The Weapon of Openness
https://forum.effectivealtruism.org/posts/2aYBDbH9cSs3SDA7K/thoughts-on-the-weapon-of-openness
Forum Post
7
Relevant summary of and thoughts on an essay arguing that the costs of secrecy outweigh the benefits.
Exploring the Streisand Effect
https://forum.effectivealtruism.org/posts/hYG63G77pFQQcxhmM/exploring-the-streisand-effect
Forum Post
6
Great breakdown of the Streisand Effect, but the overall topic is only somewhat helpful for thinking about info hazards broadly.
Thoughts on the Scope of LessWrong's Infohazard Policies
https://www.lesswrong.com/posts/nx94BD6vBY23rk6To/thoughts-on-the-scope-of-lesswrong-s-infohazard-policies
Forum Post
6
Interesting thinking-through of whether a certain LessWrong post about the CDC and COVID should be removed because of its downside risk, but doesn't offer especially useful insights.
The Fusion Power Generator Scenario
https://www.lesswrong.com/posts/2NaAhMPGub8F2Pbr7/the-fusion-power-generator-scenario
Forum Post
6
Somewhat useful, but not hugely so.
Good and bad ways to think about downside risks
https://www.lesswrong.com/s/r3dKPwpkkMnJPbjZE/p/NdDrh3ZRJvuv7BcL9
Forum Post
5
Somewhat useful for outlining good and bad mindsets to have when thinking about info hazards/downside risks in general.
The Vulnerable World Hypothesis
https://nickbostrom.com/papers/vulnerable.pdf
Paper
5
Footnotes 39 and 41 are particularly relevant (according to Aird).
Terrorism, Tylenol, and dangerous information
https://forum.effectivealtruism.org/posts/CoXauRRzWxtsjhsj6/terrorism-tylenol-and-dangerous-information
Forum Post
4
Mostly useful for a couple of real-world examples.
What areas are the most promising to start new EA meta charities - A survey of 40 EAs
https://forum.effectivealtruism.org/posts/ACrNwP2xxMoxtekbd/what-areas-are-the-most-promising-to-start-new-ea-meta
Forum Post
4
A small amount of insight into how EA community members view info hazards as they interact with the community.
Open Communication in the Days of Malicious Online Actors
https://forum.effectivealtruism.org/posts/qWJyPiws7B4XyQGJR/open-communication-in-the-days-of-malicious-online-actors
Forum Post
4
Doesn't seem especially useful for informing consideration of potential info hazards, although spending more time on it might produce some valuable insights. Does raise another aspect of potential harm from communicating information within communities, which is interesting to think about.
Horsepox synthesis: A case of the unilateralist’s curse?
https://thebulletin.org/2018/02/horsepox-synthesis-a-case-of-the-unilateralists-curse/
Article
4
Some interesting thoughts on how secrecy interacts with the unilateralist's curse, but not hugely relevant overall. Doesn't provide many answers or recommendations.
Why making asteroid deflection tech might be bad
https://forum.effectivealtruism.org/posts/vuXH2XAeAYLc4Hxyj/why-making-asteroid-deflection-tech-might-be-bad
Forum Post
3
Provides an example of a potential info hazard but overall not super useful.
Memetic downside risks: How ideas can evolve and cause harm
https://www.lesswrong.com/s/r3dKPwpkkMnJPbjZE/p/EdAHNdbkGR6ndAPJD
Forum Post
3
Mildly useful for adding some nuance to the consideration of info hazards.
Lessons from the Cold War on Information Hazards: Why Internal Communication is Critical
https://www.lesswrong.com/posts/k8qLzbHTubMjCHL2E/lessons-from-the-cold-war-on-information-hazards-why
Forum Post
3
Uses a historical example to reinforce the need to approach secrecy thoughtfully. Not a lot of new insights.
The Precipice (pp. 135-137)
https://theprecipice.com/
Book
3
Some general discussion of info hazards relevant to biohazards but no new information.
A point of clarification on infohazard terminology
https://www.lesswrong.com/posts/Rut5wZ7qyHoj3dj4k/a-point-of-clarification-on-infohazard-terminology
Forum Post
3
Terminology. Carves out a separate term for a specific kind of info hazard. Not broadly useful, and the terminology doesn't seem to be widely adopted either.
[Review] On the Chatham House Rule (Ben Pace, Dec 2019)
https://www.lesswrong.com/posts/aE5q2Mb8zQNo4eKxy/review-on-the-chatham-house-rule-ben-pace-dec-2019
Forum Post
3
Somewhat relevant considerations about secrecy in general. The most interesting part is the comment by jbash.
Knowing About Biases Can Hurt People
https://www.lesswrong.com/posts/AdYdLP2sRqPMoe8fb/knowing-about-biases-can-hurt-people
Forum Post
3
A fairly good example of an info hazard to the individuals who learn certain information, but no especially useful insights.
[Commentary] The Offense-Defense Balance of Scientific Knowledge: Does Publishing AI Research Reduce Misuse?
https://www.lesswrong.com/posts/H8fTYkNkpYio7XG8L/link-and-commentary-the-offense-defense-balance-of
Forum Post
2
Doesn't add much of use beyond the paper itself.
Managing risk in the EA policy space
https://forum.effectivealtruism.org/posts/Q7qzxhwEWeKC3uzK3/managing-risk-in-the-ea-policy-space
Forum Post
2
Not very relevant.
What harm could AI safety do?
https://forum.effectivealtruism.org/posts/ciKv8MRJ7gYyGS65o/what-harm-could-ai-safety-do
Forum Post
2
A question post; the answers aren't especially useful.
Assessing global catastrophic biological risks (Crystal Watson)
https://forum.effectivealtruism.org/posts/zDL7HP5pXkabYRTzW/assessing-global-catastrophic-biological-risks-crystal
Forum Post
1
Not very relevant; only a small mention of info hazard-related topics.
Mapping downside risks and information hazards
https://www.lesswrong.com/s/r3dKPwpkkMnJPbjZE/p/RY9XYoqPeMc8W8zbH
Forum Post
1
Not very useful for thinking about info hazards in a practical sense.
How to avoid accidentally having a negative impact with your project
https://forum.effectivealtruism.org/posts/otLCoYN3neacjBy48/max-dalton-and-jonas-vollmer-how-to-avoid-accidentally
Forum Post
1
Video and transcript. Not very relevant.
Ways people trying to do good accidentally make things worse, and how to avoid them
https://80000hours.org/articles/accidental-harm/
Article
1
Not especially relevant.
X-risks of SETI and METI?
https://forum.effectivealtruism.org/posts/kwr6Asjsrz7ruigWr/x-risks-of-seti-and-meti
Forum Post
0
Not useful at all.
Infohazards: The Future Is Disbelieving Facts?
https://forum.effectivealtruism.org/posts/nEGhoRyspefaERCqp/infohazards-the-future-is-disbelieving-facts
Forum Post
0
Not useful at all.
On the Chatham House Rule
https://www.lesswrong.com/posts/sWof2zGexwwJ8Q4ND/on-the-chatham-house-rule
Forum Post
0
Not relevant.
Memetic Hazards in Videogames
https://www.lesswrong.com/posts/muXfZr5EYCfZqLmsb/memetic-hazards-in-videogames
Forum Post
0
Not relevant.
Information hazards: a very simple typology
https://forum.effectivealtruism.org/posts/X5S2ZB4RcPxZGN68T/information-hazards-a-very-simple-typology
Forum Post
Intro, terminology
Open until dangerous — gene drive and the case for reforming research
https://forum.effectivealtruism.org/posts/AmwrDXR7QPzTEGjZi/george-church-kevin-esvelt-and-nathan-labenz-open-until
Video
Collection of all prior work I found that seemed substantially relevant to information hazards
https://forum.effectivealtruism.org/posts/EMKf4Gyee7BsY2RP8/michaela-s-shortform?commentId=dTghHNHmc5qf5znMQ
Collection of Resources
Accidental harm
https://forum.effectivealtruism.org/tag/accidental-harm
Collection of Resources
Information Hazards in Biotechnology
https://onlinelibrary.wiley.com/doi/full/10.1111/risa.13235
Paper
Information Hazards: A Typology of Potential Harms from Knowledge
https://nickbostrom.com/information-hazards.pdf
Paper
Foundational paper
Information Hazards
https://www.lesswrong.com/tag/information-hazards
Collection of Resources
LW tag
Strategic Implications of Openness in AI Development
https://www.nickbostrom.com/papers/openness.pdf
Paper
Unintended consequences
https://en.wikipedia.org/wiki/Unintended_consequences
Article
Has examples of different kinds of unintended consequences.
Counterproductive Altruism: The Other Heavy Tail
https://onlinelibrary.wiley.com/doi/abs/10.1111/phpe.12133
Paper
A brief history of ethically concerned scientists
https://www.lesswrong.com/posts/hxaq9MCaSrwWPmooZ/a-brief-history-of-ethically-concerned-scientists
Forum Post
A few misconceptions surrounding Roko's basilisk
https://www.lesswrong.com/posts/WBJZoeJypcNRmsdHx/a-few-misconceptions-surrounding-roko-s-basilisk
Forum Post
Winning vs Truth – Infohazard Trade-Offs
https://www.lesswrong.com/posts/2ta6Bo6D5efif4fXB/winning-vs-truth-infohazard-trade-offs
Forum Post
Shock Level 5: Big Worlds and Modal Realism
https://www.lesswrong.com/posts/SkXLrDXyHeekqgbFg/shock-level-5-big-worlds-and-modal-realism
Forum Post
USA v Progressive 1979 excerpt
https://www.lesswrong.com/posts/TKW5PD8fS2ej7JqxG/usa-v-progressive-1979-excerpt
Forum Post
Historical example.
The Weapon of Openness
http://www.transrio.com/en-ingles/wp-content/uploads/2009/01/The-Weapon-of-Openness.pdf
Paper
Pokémon contagion: photosensitive epilepsy or mass psychogenic illness?
https://europepmc.org/article/med/11235034
Paper
Only tangentially related, but a fantastic premise.
Forbidden Knowledge
https://science.sciencemag.org/content/sci/307/5711/854.full.pdf
Paper
Pretty short.
Inoculating science against potential pandemics and information hazards
https://scholar.harvard.edu/files/kleelerner/files/rpp_20181004_plos_pathogens_-_inoculating_science_against_potential_pandemics_and_information_hazards.pdf
Paper
Focused on biohazards.
Agential risks and information hazards: An unavoidable but dangerous topic?
https://www.sciencedirect.com/science/article/pii/S0016328717302732
Paper
Counteracting the Spread of Socially Dangerous Information on the Internet: A Comparative Legal Study
https://books.google.com/books?hl=en&lr=&id=wxRwDwAAQBAJ&oi=fnd&pg=PA135&ots=buLeq0_odB&sig=J32Rl1isUdoVasS1bwpP4osyQwU#v=onepage&q&f=false
Paper
Discusses "socially dangerous information." Might be too dense.
Dangerous Knowledge: A Case Study in the Social Control of Knowledge
https://journals.sagepub.com/doi/abs/10.1177/144078337801400201?journalCode=josa
Paper
Concerned with an educational setting; only tangentially related.
Producing Dangerous Knowledge: researching knowledge production in Belgium
https://journals.sagepub.com/doi/pdf/10.2304/eerj.2011.10.2.252
Paper
Might not be related.
Dangerous Knowledge
https://digital.library.sbts.edu/bitstream/handle/10392/2431/2004-03-07.pdf?sequence=1
Article
Morality of knowledge.
Public Controversy and the Production of Nonknowledge
https://onlinelibrary.wiley.com/doi/pdf/10.1111/j.1573-7861.2011.01259.x
Paper
Forbidden Knowledge
https://science.sciencemag.org/content/sci/307/5711/854.full.pdf
Article
Stratospheric aerosol injection research and existential risk
http://johnhalstead.org/wp-content/uploads/2018/03/Halstead-Stratospheric-aerosol-injection-research-and-exist.pdf
Paper
Ctrl+F "information hazard" to locate the relevant mentions.
What Are the Guiding Ethical Principles of Science Communication?
https://link.springer.com/chapter/10.1007/978-3-030-32116-1_9
Paper
Simulation Typology and Termination Risks
https://arxiv.org/ftp/arxiv/papers/1905/1905.05792.pdf
Paper
Information hazards mentioned on page 17.
Unethical Research: How to Create a Malevolent Artificial Intelligence
https://arxiv.org/ftp/arxiv/papers/1605/1605.02817.pdf
Paper
Mentions information hazards on page 7, listed as one of the ways a malevolent AI could try to harm humanity.
The Possibility of an Ongoing Moral Catastrophe
https://assets.ctfassets.net/ohf186sfn6di/1UzEHuHjxaSOMs02IcUKS0/055ec7f9f5367b82ea2e3db532db5818/williams2015_--_Moral_Catasatrophe.pdf
Paper
Mention on page 8.
An AI Race for Strategic Advantage: Rhetoric and Risks
https://dl.acm.org/doi/pdf/10.1145/3278721.3278780
Paper
Risk Mysterianism and Cognitive Boosters
https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.390.6946&rep=rep1&type=pdf
Paper
Mention on page 3.
AI Research Considerations for Human Existential Safety (ARCHES)
https://arxiv.org/pdf/2006.04948.pdf
Paper
Mention on page 32.
The Unilateralist’s Curse and the Case for a Principle of Conformity
https://www.tandfonline.com/doi/full/10.1080/02691728.2015.1108373
Paper
Brief mention of information hazards.
Forecasting AI Progress: A Research Agenda
https://arxiv.org/ftp/arxiv/papers/2008/2008.01848.pdf
Paper
Mention on page 36.
Racing to the precipice: a model of artificial intelligence development
https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.434.7824&rep=rep1&type=pdf
Paper
Leakproofing the Singularity: Artificial Intelligence Confinement Problem
http://cecs.louisville.edu/ry/LeakproofingtheSingularity.pdf
Paper
Some mention of information hazards, specifically AI hazards.
Taxonomy of Pathways to Dangerous AI
https://arxiv.org/ftp/arxiv/papers/1511/1511.03246.pdf
Paper
A “psychopathic” Artificial Intelligence: the possible risks of a deviating AI in Education
https://sciendo.com/downloadpdf/journals/rem/11/1/article-p93.pdf
Paper
Security Solutions for Intelligent and Complex Systems
https://books.google.com/books?hl=en&lr=&id=V4TvDAAAQBAJ&oi=fnd&pg=PA37&ots=FkOPKi8CJK&sig=0zBJWbSi3QYTxMr6a7oiXfmEOS8#v=onepage&q&f=false
Paper
Thinking Inside the Box: Controlling and Using an Oracle AI
https://zoo.cs.yale.edu/classes/cs671/12f/12f-papers/oracle.pdf
Paper
Mention on page 8.
Detecting Qualia in Natural and Artificial Agents
https://arxiv.org/ftp/arxiv/papers/1712/1712.04020.pdf
Paper
Mention on page 10. Maybe not very related.
Reducing malicious use of synthetic media research: Considerations and potential release practices for machine learning
https://arxiv.org/pdf/1907.11274.pdf
Paper
Seems very good at first glance!
Chess as a Testing Grounds for the Oracle Approach to AI Safety
https://arxiv.org/ftp/arxiv/papers/2010/2010.02911.pdf
Paper
A Game of Stars: Active SETI, radical translation and the Hobbesian trap
https://www.sciencedirect.com/science/article/pii/S0016328718300405
Paper
Seems useful at first glance!
The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents
https://www.nickbostrom.com/superintelligentwill.pdf
Paper
Mention on page 10. Maybe not relevant.
Forbidden knowledge in machine learning: reflections on the limits of research and publication
https://link.springer.com/content/pdf/10.1007/s00146-020-01045-4.pdf
Paper
Seems very good at first glance!
Fast, accurate, and secure DNA synthesis screening with random adversarial thresholds
https://www.securedna.org/download/Random_Adversarial_Threshold_Screening.pdf
Paper
Seems very good at first glance! Bio-risk focus.
Operations security
https://en.wikipedia.org/wiki/Operations_security
Article
Wikipedia article on a process worth looking into.