US 20140222524 A1
(19) United States
(12) Patent Application Publication    (10) Pub. No.: US 2014/0222524 A1
Pluschkell et al.                      (43) Pub. Date: Aug. 7, 2014

(54) CHALLENGE RANKING BASED ON USER REPUTATION IN SOCIAL NETWORK AND ECOMMERCE RATINGS

(71) Applicant: Mindjet LLC, San Francisco, CA (US)
(72) Inventors: Paul Pluschkell, Pleasanton, CA (US); Dustin W. Haisler, Elgin, TX (US); [name illegible] Charboneau, Independence, MO (US)
(73) Assignee: Mindjet LLC, San Francisco, CA (US)
(21) Appl. No.: 13/959,733
(22) Filed: Aug. 5, 2013

Related U.S. Application Data
(60) Provisional application No. 61/679,747, filed on Aug. 5, 2012.

Publication Classification
(51) Int. Cl.: G06Q 10/06 (2006.01)
(52) U.S. Cl.: CPC G06Q 10/0637 (2013.01); USPC 705/7.36

(57) ABSTRACT
A network-based rating system provides a mechanism whereby users can submit and rate challenges in a challenge processing step and submit and rank ideas in an idea processing step that correspond to the challenges in the challenge processing step. The highest rated challenge is determined in the challenge processing step and is then submitted to users in an idea processing step. Alternatively, challenges submitted by users in the challenge processing step are submitted to users for ideas in the idea processing step on an ongoing basis. In these alternative embodiments, the challenge rated in the idea processing step is a challenge other than the highest rated challenge determined in the challenge processing step. In another embodiment, the users rating challenges are the same users rating ideas in the idea processing step, and in other embodiments the users are not the same as those rating ideas in the idea processing step.
[FIG. 1 (Sheet 1 of 9): drawing of network-based rating system 1, showing users' network appliances communicating with a central server.]
[FIG. 2 (Sheet 2 of 9): flowchart of innovation process 40 — START; challenge processing (70); challenge processing complete?; idea processing (100); idea processing complete? (115); idea implementation (120); implementation complete? (125); idea measurement (130); idea measurement complete? (135).]
[FIG. 3 (Sheet 3 of 9): flowchart — START; challenge processing (70); challenge processing complete? (85); idea processing (100); idea processing complete? (115).]
[FIG. 4 (Sheet 4 of 9): flowchart of challenge processing method 70 —
71: display a first challenge to the users of a rating system that solicits the users to submit second challenges for rating;
72: store an initial reputation value for each of the users;
73: receive a first of the second challenges from a first user;
74: receive an actual rating of the second challenge from a second user;
75: for each actual rating, generate a corresponding effective challenge rating (ER), where how the AR is adjusted is a function of: A) the reputation (RP) of the user who submitted the AR, B) the freshness (RF) of the AR, and C) a probability (PT) that the user who generated the actual rating acts with the crowd in generating actual ratings;
76: determine an updated reputation value based on the effective value;
77: replace the initial reputation value with the updated reputation value;
78: end of computing cycle? (if NO, return to 73);
79: determine a ranking of the second challenges based in part on the effective rating;
80: determine which of the second challenges has the highest rating;
81: publish the challenge with the highest rating to users of the rating system and solicit ideas from the users to solve the highest ranked challenge.]
[FIG. 5 (Sheet 5 of 9): flowchart of idea processing method 100 —
101: post a challenge to the users of the system; advertise a reward;
102: a user submits an object to be rated (RO);
103: a user rates an RO of another user by submitting an actual rating (AR) for that RO;
104: adjust each actual rating (AR), thereby generating a corresponding effective rating (ER), where how the AR is adjusted is a function of: A) the reputation (RP) of the user who submitted the AR, B) the freshness (RF) of the AR, and C) a probability (PT) that the user who generated the actual rating acts with the crowd in generating actual ratings;
105: redetermine the reputation (RP) of the user who submitted the RO;
106: end of computing cycle? (if NO, return to 102);
107: rank users according to user reputation (RP); display a ranking of users;
108: for each RO, use the ERs submitted for that RO to rank the RO; display a ranking of ROs;
109: challenge over? (if NO, return to 102);
110: grant reward to the user who submitted the highest ranked RO.]
[FIG. 6 (Sheet 6 of 9): screen shot 20 — a system administrator or executive sponsor posts (21) an initial or first challenge: "HOW DO WE MAKE OUR PRODUCTS MORE ENVIRONMENTALLY FRIENDLY?"]

[FIG. 7: the initial challenge as presented to a user, with a text field 22 ("TYPE YOUR RESPONSIVE CHALLENGE HERE") and a SUBMIT button 23.]
[FIG. 8 (Sheet 7 of 9): the user types the user's responsive challenge into text field 22 ("HOW CAN THE COMPANY REDUCE THE POWER CONSUMPTION OF ITS PRODUCTS?") and submits the responsive challenge (RC) as to how to make the products more environmentally friendly.]

[FIG. 9: a page listing the responsive challenges submitted by other users. The user submits an actual rating (AR) for each RC submitted by another user by selecting an AR value of −1 or +1 (buttons 24).]
[FIG. 10 (Sheet 8 of 9): display of the current ranking of responsive challenges 1-7 under the initial challenge.]

[FIG. 11: screen shot 20 — the system administrator posts (21) the highest ranked responsive challenge as a new challenge: "HOW CAN THE COMPANY REDUCE THE POWER CONSUMPTION OF OUR PRODUCTS?"]
[FIG. 12 (Sheet 9 of 9): the challenge as presented to a user, with the reward ($1000.00), a text field 22 ("TYPE YOUR IDEA HERE"), and a SUBMIT button 23.]
CHALLENGE RANKING BASED ON USER
REPUTATION IN SOCIAL NETWORK AND
ECOMMERCE RATINGS
CROSS REFERENCE TO RELATED
APPLICATION
[0001] This application claims the benefit under 35 U.S.C. §119 from provisional U.S. patent application Ser. No. 61/679,747, entitled "Challenge Ranking Based on User Reputation in Social Network and Ecommerce Rating Systems," filed on Aug. 5, 2012, the subject matter of which is incorporated herein by reference.
BACKGROUND INFORMATION
[0002] Identifying and solving strategic challenges are critical activities that can greatly influence an organization's success. Examples of strategic challenges that many companies experience include what new products or product features are required by the marketplace. Other companies have challenges regarding how to reduce costs or how to raise capital. The ability of organizations to identify these challenges can be determinative of the organization's success. This ability can also be a competitive advantage for the organization if the company is able to identify and solve strategic challenges faster than its competitors. Many companies, however, do not use efficient processes for identifying and solving strategic challenges.
[0003] Companies also tend to rely on a few individuals to identify and solve the organization's strategic challenges. Companies may also employ serial approaches to identify these challenges. For example, a corporation may rely on a single executive sponsor to identify and propose a strategic challenge to all other employees of the company. After this first challenge has been solved, the sponsor may then identify and propose subsequent challenges. Because this process is inefficient and does not utilize more employees in the challenge identification process, a better methodology is desired.
SUMMARY
[0004] A network-based rating system provides a mechanism whereby users can submit and rate challenges in a challenge processing step and submit and rank ideas in an idea processing step that correspond to the challenges submitted in the challenge processing step. In one embodiment, the highest rated challenge is determined in the challenge processing step and then that challenge is submitted to users in an idea processing step. In other embodiments, challenges submitted by users in the challenge processing step can be submitted to users for ideas in the idea processing step on an ongoing and constant basis. In these alternative embodiments, the challenge rated in the idea processing step is a challenge other than the highest rated challenge determined in the challenge processing step. In yet another embodiment, the users rating challenges in the challenge processing step are the same users rating ideas in the idea processing step. In other embodiments, the users rating challenges in the challenge processing step are not the same users rating ideas in the idea processing step.
[0005] The manner in which the rating system operates in rating and ranking the challenges is similar to the operation of the system when rating and ranking ideas. In one novel aspect, a user provides an actual rating (AR) of a second challenge submitted by another user in response to a first challenge. The AR is multiplied by a weighting factor to determine a corresponding effective rating (ER) for the second challenge. Rather than the ARs of challenges being averaged to determine a ranking of the challenges, the ERs of challenges are averaged to determine a ranking of challenges. The ERs regarding the challenges submitted by a particular user are used to determine a quantity called the "reputation" RP of the user. The reputation of a user is therefore dependent upon what other users thought about challenges submitted by the user. Such a reputation RP is maintained for each user of the system. The weighting factor that is used to determine an ER from an AR is a function of the reputation RP of the user who submitted the AR. If the user who submitted the AR had a higher reputation (RP is larger) then the AR of the user is weighted more heavily, whereas if the user who submitted the AR had a lower reputation (RP is smaller) then the AR of the user is weighted less heavily.
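The specification does not disclose the actual weighting formula, so the following minimal Python sketch only illustrates the reputation-weighting idea; the linear normalization by the largest reputation in the system (`max_rp`) is an assumption introduced here, not part of the disclosure.

```python
# Hypothetical sketch of reputation-weighted effective ratings.

def effective_rating(ar, rp, max_rp):
    """Convert an actual rating (AR) into an effective rating (ER).

    ar:     the actual rating, -1 or +1
    rp:     reputation of the user who submitted the AR
    max_rp: largest reputation in the system (assumed normalizing constant)
    """
    weight = rp / max_rp          # higher-reputation raters count more
    return ar * weight

def challenge_score(ers):
    """Challenges are ranked by the average of their ERs, not raw ARs."""
    return sum(ers) / len(ers)
```

Under this sketch, a +1 vote from a rater at half the maximum reputation contributes an ER of 0.5, and a challenge's rank score is simply the mean of the ERs it has received.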
[0006] In a second novel aspect, the weighting factor used to determine an ER from an AR is also a function of a crowd voting probability value PT. The crowd voting probability value PT is a value that indicates the probability that the user who submitted the AR acts with the crowd in generating ARs. The crowd is the majority of a population that behaves in a similar fashion. The probability value PT is determined by applying Bayes' theorem and taking into account the number of positive and negative votes. If the user who generated the AR is determined to have a higher probability of voting with the crowd (PT is closer to 1) then the AR is weighted more heavily, whereas if the user who generated the AR is determined to have a lower probability of voting with the crowd (PT is closer to 0) then the AR is weighted less heavily.
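The text says only that PT is obtained by Bayes' theorem from the counts of positive and negative votes; one way that could look is the sketch below, where the prior and the two likelihood values are illustrative assumptions, not figures from the specification.

```python
# Hypothetical Bayes'-rule estimate of the crowd-voting probability PT.

def crowd_probability(agree, disagree,
                      p_crowd=0.5,          # assumed prior: user is a crowd voter
                      p_agree_if_crowd=0.9, # assumed likelihoods, not taken
                      p_agree_if_not=0.5):  # from the specification
    """P(user votes with the crowd | observed votes), via Bayes' theorem.

    agree / disagree count how many of the user's past ARs matched or
    opposed the majority's positive/negative vote on each rated object.
    """
    like_crowd = (p_agree_if_crowd ** agree) * ((1 - p_agree_if_crowd) ** disagree)
    like_other = (p_agree_if_not ** agree) * ((1 - p_agree_if_not) ** disagree)
    numerator = like_crowd * p_crowd
    return numerator / (numerator + like_other * (1 - p_crowd))
```

With no vote history the estimate stays at the prior, and a long run of majority-matching votes pushes PT toward 1, which is the behavior the paragraph above describes.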
[0007] In a third novel aspect, the weighting factor used to determine an ER from an AR is a function of the freshness RF of the AR. If the AR is relatively old (RF is a large value) then the AR is weighted less heavily, whereas if the AR is relatively fresh (RF is a small value) then the AR is weighted more heavily.
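No freshness formula is disclosed; a common way to realize "older ratings count less" is exponential decay, sketched here with an assumed half-life parameter.

```python
# Hypothetical freshness weighting: weight halves every `half_life` units.

def freshness_weight(rf, half_life=30.0):
    """Weight an AR by its freshness RF: large RF (an old AR) counts less.

    rf:        age of the AR (e.g., in days)
    half_life: assumed period over which an AR's weight halves
    """
    return 0.5 ** (rf / half_life)
```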
[0008] In a fourth novel aspect, a decay value D is employed in determining a user's reputation. One component of the user's reputation is an average of ERs submitted in the current computing cycle. A second component of the user's reputation is a function of a previously determined reputation RPT−1 for the user from the previous computing cycle. The component of the user's reputation due to the prior reputation RPT−1 is discounted by the decay value D. If the user was relatively inactive and disengaged from the system then the decay value D is smaller (not equal to 1 but a little less, for example, D=0.998) and the impact of the user's earlier reputation RPT−1 is discounted more, whereas if the user is relatively active and engaged with the system then the decay value D is larger (for example, D=1) and the impact of the user's earlier reputation RPT−1 is discounted less. As users submit ARs and ROs and use the system, the reputations of the users change. The network-based rating system is usable to solicit and extract challenges from a group of users, and to determine a ranking of the challenges to find the challenge that is likely the best. A ranking of challenges in order of the highest average of ERs for the challenge to the lowest average of ERs for the challenge is maintained and is displayed to users. At the end of a challenge processing period, the highest rated challenge is determined and that challenge is submitted to users in the idea processing step.
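The two-component reputation update described above can be sketched as follows. The decay values (0.998 for a disengaged user, 1.0 for an engaged one) come from the examples in the text; the equal blend of the two components is an assumption made here for illustration, since the specification does not give the combining formula.

```python
# Hypothetical one-cycle reputation update with decay value D.

def updated_reputation(prev_rp, cycle_ers, engaged):
    """Blend this cycle's average ER with the decayed prior reputation.

    prev_rp:   reputation RPT-1 from the previous computing cycle
    cycle_ers: ERs received on this user's submissions this cycle
    engaged:   whether the user was active in the system this cycle
    """
    d = 1.0 if engaged else 0.998   # decay values taken from the text
    avg_er = sum(cycle_ers) / len(cycle_ers) if cycle_ers else 0.0
    # Equal 50/50 blend of the two components is an assumption.
    return 0.5 * avg_er + 0.5 * d * prev_rp
```

An inactive user's prior reputation is discounted slightly more each cycle than an active user's, so sustained disengagement gradually erodes reputation, as the paragraph above intends.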
[0009] Further details and embodiments and techniques are described in the detailed description below. This summary does not purport to define the invention. The invention is defined by the claims.
BRIEF DESCRIPTION OF THE DRAWINGS
[0010] The accompanying drawings, where like numerals indicate like components, illustrate embodiments of the invention.
[0011] FIG. 1 is a drawing of a network-based rating system.
[0012] FIG. 2 is a flowchart of an innovation process in accordance with one novel aspect.
[0013] FIG. 3 is a flowchart of a method of operation of the network-based rating system 1 in accordance with one novel aspect.
[0014] FIG. 4 is a flowchart of a method of operation of the network-based rating system 1 for receiving, rating, and selecting challenges.
[0015] FIG. 5 is a flowchart of a method of operation of the network-based rating system 1 for receiving, rating, and selecting ideas.
[0016] FIG. 6 is an illustration of a screen shot of what is displayed on the screen of a network appliance when a system administrator or executive sponsor is posting a first challenge.
[0017] FIG. 7 is an illustration of a screen shot of how the first challenge is presented to users of the system.
[0018] FIG. 8 is an illustration of a page displayed on the screen of a user's network appliance after the user has entered a second challenge into the page but before the user has selected the "SUBMIT" button.
[0019] FIG. 9 is an illustration of a page that displays challenges to the users of the system and solicits the users to submit actual ratings (ARs).
[0020] FIG. 10 is an illustration of a page that displays a ranking of second challenges that have been submitted by users in response to a first challenge.
[0021] FIG. 11 is an illustration of a screen shot of what is displayed on the screen of the network appliance of the administrator ADMIN when the ADMIN is posting a challenge soliciting ideas to solve the highest ranked challenge from FIG. 10.
[0022] FIG. 12 is an illustration of a screen shot of how the highest ranked challenge is presented to users of the system before the user submits the user's idea for solving the challenge.
DETAILED DESCRIPTION
[0023] Reference will now be made in detail to some embodiments of the invention, examples of which are illustrated in the accompanying drawings.
[0024] FIG. 1 is a diagram of a network-based rating system 1 in accordance with one novel aspect. Each of the users A-F uses an application (for example, a browser) executing on a networked appliance to communicate via network 8 with a rating system program 9 executing on a central server 10. Rating system program 9 accesses and maintains a database 20 of stored rating information. Blocks 2-7 represent networked appliances. The networked appliance of a user is typically a personal computer or cellular telephone or another suitable input/output device that is coupled to communicate with network 8. Each network appliance has a display that the user of the network appliance can use to view rating information. The network appliance also provides the user a mechanism such as a keyboard or touchpad or mouse for entering information into the rating system.
[0025] Network 8 is typically a plurality of networks and may include a local area network and/or the internet. In the specific example described here, a company interested in producing environmentally sustainable or ecologically "friendly" products wants to effectively and efficiently identify several strategic challenges associated with producing these ecologically friendly products. The users A-F are employees of the company and may include an executive sponsor. The network 8 is an intra-company private computer network maintained by the company for communication between employees when performing company business. The rating system program 9 is administered by the network administrator ADMIN of the company network 8. The administrator ADMIN interacts with network 8 and central server 10 via network appliance 11.
[0026] FIG. 2 is a flowchart of an innovation process. In step 70, the challenge processing step, challenges are identified, validated, and selected. In step 70, an executive sponsor or employee of a company may identify a certain challenge such as a recurring point of frustration of one or several of the company's present or future customers. During validation of the challenge, employees of the company may vote on, refine, rate, and rank the challenges of other employees. After a challenge is selected as a result of the validation step, the challenge is queued for idea submissions. In a step 85, a determination of whether challenge processing is completed is made. If further challenge processing is required, step 70 continues, and if not, idea processing begins.
[0027] In the idea processing step 100, the challenge selected in the challenge processing step is opened to the employees of the company for idea submissions. Any ideas submitted then proceed through validation, evolution, selection, and funding processes within step 100. For example, the employees of the company may submit ideas to solve the challenge selected in step 70. Ideas are reviewed by the employees or "crowd" through voting, rating, ranking, page views, and other mechanisms. The ranked ideas are passed through a process that refines the original concepts for those ideas, and then the top ideas are selected and the idea funding process occurs. During the idea funding process, employees have an opportunity to select ideas on which to spend pre-budgeted innovation funds. Next, a determination is made in step 115 whether idea processing is complete. If the selection has not been made, then further idea processing can continue. In alternative embodiments, the process can return to the challenge processing stage for further identification, validation, and selection of challenges.
[0028] In the idea implementation step, step 120, the funded ideas are implemented. Once the ideas are implemented, a determination is made in a step 125 whether the idea implementation step is complete. If the idea implementation is complete then an idea measurement step 130 commences. If idea implementation is not complete, the idea implementation phase can continue. In alternative embodiments, the process can return to the idea processing step 100 or the challenge processing step 70 after step 120.
[0029] In an idea measurement step 130, the employees review whether the implemented ideas of the previous step solved the original challenge submitted during the challenge processing step. If a determination is made in a step 135 that idea measurement has completed, then step 70, the challenge processing step, can be repeated. If not, further idea measurement may continue. In one novel embodiment, the innovation process of FIG. 2 can begin or continue at any point within the process.
[0030] FIG. 3 is a flowchart of a method 50 involving the operation of the network-based rating system of FIG. 1. FIG. 3 includes a portion 50 of the innovation process of FIG. 2. In a challenge processing step (step 70) a network-based rating system is used i) to propose a first challenge to users of the rating system; ii) to rate second challenges received in response to the first challenge; and iii) to select a ranked challenge. Once the challenge processing step is complete (step 85), ideas for solving the challenges identified in step 70 are submitted, rated, and selected in an idea processing step (step 100). Once the idea processing step is complete, subsequent steps (not shown) may occur. Process steps that include funding or implementation of the ideas selected in the idea processing step are examples of subsequent processing steps. In an alternative embodiment, the idea and challenge processing steps occur simultaneously. The challenge processing step (step 70) and the idea processing step (step 100) will now be discussed in more detail.
[0031] FIG. 4 is a flowchart of a method 70 involving an operation of the network-based rating system 1 of FIG. 1. The administrator ADMIN or an executive sponsor interacts with the rating system program 9, thereby causing a first challenge to be displayed to users A-F of the rating system of FIG. 1. This first challenge solicits the users to submit second challenges that will then be rated by the users A-F. The first challenge may be a solicitation to submit challenges for defining new products or product roadmaps for the company, product features, or any other strategic issue relating to the company or the marketplace in which the company competes with other organizations. Through the system, each user is notified of the initial challenge via the user's networked appliance. In the present example, the initial challenge is titled "HOW CAN WE MAKE OUR PRODUCTS MORE ENVIRONMENTALLY FRIENDLY?" The web page that presents the first challenge to a user also includes a text field. The web page solicits the user to enter into the text field the user's challenge, or a second challenge, in response to the first challenge proposed to the users.
[0032] In the method of FIG. 4, a user views the challenge web page and in response types the second challenge into the text box. The user's challenge may be, respective to that specific user, a challenge that they believe is most relevant to meeting the corporate objectives framed by the ADMIN or the executive sponsor in the first challenge. In this example, the first of the second challenges submitted by a user of the rating system is "HOW CAN THE COMPANY REDUCE THE POWER CONSUMPTION OF OUR PRODUCTS?" After typing the first of the second challenges in response to the first challenge, the user selects a "SUBMIT" button on the page, thereby causing the second challenge to be submitted to and received by (step 73) the rating system. Multiple such responsive or second challenges can be submitted by multiple users in this way. An individual user may submit more than one responsive challenge if desired. As these second challenges are submitted and received by the rating system, a list of all the submitted challenges is presented to the users of the system. A user can read the challenges submitted by other users, consider the merits of those second challenges, and then submit ratings corresponding to those challenges. The rating is referred to here as an "actual rating" or an "AR". In the present example, along with each responsive challenge displayed to the user is a pair of buttons. The first button is denoted "−1". The user can select this button to submit a negative rating or a "no" vote for the second challenge. The second button is denoted "+1". The user can select this button to submit a positive rating or a "yes" vote for the responsive challenge. In the method of FIG. 4, the user selects the desired button, thereby causing the actual rating to be submitted to and received (step 74) by the system. Before the user submits the AR, the user cannot see the number of +1 ARs and the number of −1 ARs that the second challenge has received. This prevents the user from being influenced by how others have voted on the rated object. The system records the AR in association with the responsive challenge to which the AR pertains. Multiple ARs are collected in this way for every challenge from the various users of the system.
[0033] Rather than just using the raw ARs to determine a consensus of what the users think the best challenge is, each AR is multiplied by a rating factor to determine an adjusted rating referred to as an "Effective Rating" or an "ER" (step 75). How the AR is adjusted to determine the associated ER is a function of: A) a previously determined reputation (RP) of the user who submitted the AR, B) the freshness (RF) of the AR, and C) a probability that the user who generated the AR acts with the crowd in generating ARs. The details of how an ER is determined from an AR are described further below.
[0034] The reputation (RP) of a user is used as an indirect measure of how good the challenges submitted by the user tend to be. The user's reputation is dependent upon ERs derived from the ARs received from other users regarding the challenges submitted by the user, and the rating system sets an initial reputation for each user (step 72). Accordingly, in the example of FIG. 4, after a new actual rating AR is received regarding the second challenge of a user, the reputation of the user is updated (step 76) and replaces that user's initial reputation (step 77). If the current computing cycle has not ended, then processing returns to step 73 and new challenges submitted by users in response to the first challenge may be received into the system. Users may submit actual ratings on various ones of the second challenges. Each time an actual rating is made, the reputation of the user who generated the second challenge is updated.
[0035] After the replacing of the initial reputation value with the updated reputation value of step 77 has been performed, the next computing cycle starts and processing returns to step 73 as indicated in FIG. 4. Operation of the rating system proceeds through steps 73 through 77, from computing cycle to computing cycle, with second challenges being submitted and actual ratings on the second challenges being collected. Each actual rating is converted into an effective rating, and the effective ratings are used to update the reputations of the users as appropriate. After a certain amount of time, the system determines (step 78) that the challenge period is over.
[0036] At the end of the computing cycle (step 78), processing proceeds to step 79. For each second challenge, the effective ratings for that second challenge are used to determine a rank (step 79) of the challenge with respect to other challenges. A determination of the highest ranked challenge is also made (step 80). The ranking of all challenges submitted is also displayed to the users A-F. In the illustrated specific embodiment, step 79 occurs near the end of each computing cycle. In other embodiments, the ranking of challenges can be done on an ongoing and constant basis. Computing cycles can be of any desired duration.
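The end-of-cycle ranking of steps 79 and 80 can be sketched as follows; this is an illustrative sketch only, ordering challenges by the average of their ERs as described in the summary, with the dictionary shape being an assumption.

```python
# Hypothetical end-of-cycle ranking of second challenges by average ER.

def rank_challenges(ers_by_challenge):
    """Order challenges from highest to lowest average effective rating.

    ers_by_challenge maps a challenge identifier to the list of ERs
    collected for that challenge during the computing cycle.
    """
    averages = {c: sum(ers) / len(ers)
                for c, ers in ers_by_challenge.items() if ers}
    return sorted(averages, key=averages.get, reverse=True)
```

The first element of the returned list is the highest rated challenge (step 80), which would then be published to solicit ideas.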
[0037] FIG. 5 is a flowchart of the method 100 of FIG. 3 involving an operation of the network-based rating system 1 of FIG. 1. The administrator ADMIN interacts with the rating system program 9, thereby causing the highest ranked challenge to be posted (step 101) to the users A-F of the system. The highest ranked challenge was determined by the challenge processing step (step 80) of FIG. 3. Through the system, each user is notified of the challenge via the user's networked appliance. In the present example, the challenge is titled "HOW CAN THE COMPANY REDUCE THE POWER CONSUMPTION OF OUR PRODUCTS?" The web page that presents this challenge to a user also includes a text field. The web page solicits the user to type the user's idea into the text field.
[0038] In the method of FIG. 5, a user views this challenge advertising web page and in response types the user's idea into the text box. The user's idea is an object to be rated or a "rated object" in this step. After typing the idea for how the company can reduce power consumption in its products into the text box, the user selects a "SUBMIT" button on the page, thereby causing the Rated Object (RO) to be submitted (step 102) to the rating system. Multiple such ROs are submitted by multiple users in this way. An individual user may submit more than one RO if desired. As ROs are submitted, a list of all the submitted ROs is presented to the users of the system. A user can read the rated objects (ROs) submitted by other users, consider the merits of those ROs, and then submit ratings corresponding to those ROs in a manner similar to that in which second challenges are rated in the challenge processing step 70 of FIG. 3. The rating is referred to here as an "actual rating" or an "AR". In the present example, along with each idea displayed to the user is a pair of buttons. The first button is denoted "−1". The user can select this button to submit a negative rating or a "no" vote for the idea. The second button is denoted "+1". The user can select this button to submit a positive rating or a "yes" vote for the idea. In the method of FIG. 5, the user selects the desired button, thereby causing the Actual Rating AR to be submitted (step 103) to the system. Before the user submits the AR, the user cannot see the number of +1 ARs and the number of −1 ARs the RO has received. This prevents the user from being influenced by how others have voted on the RO. The system records the AR in association with the RO (the rated object) to which the AR pertains. Multiple ARs are collected in this way for every RO from the various users of the system.
[0039] Rather than just using the raw ARs to determine a consensus of what the users think the best submitted idea is, each AR is multiplied by a rating factor to determine (step 104) an adjusted rating referred to as an "Effective Rating" or an "ER". How the AR is adjusted to determine the associated ER is a function of: A) a previously determined reputation (RP) of the user who submitted the AR, B) the freshness (RF) of the AR, and C) a probability that the user who generated the AR acts with the crowd in generating ARs. The details of how an ER is determined from an AR are described further below.
[0040] The reputation (RP) of a user is used as an indirect measure of how good the ROs of the user tend to be. The user's reputation is dependent upon ERs derived from the ARs received from other users regarding the ROs submitted by the user. Accordingly, in the example of FIG. 5, after a new actual rating AR is received regarding the rated object (the RO) of a user, the reputation of the user is redetermined (step 105). If the current computing cycle has not ended, then processing returns to step 102. New rated objects may be received into the system. Users may submit ARs on various ones of the ROs displayed to the users. Each time an AR is made, the reputation of the user who generated the RO is updated.
[0041] At the end of the computing cycle (step 106), processing proceeds to step 107. The system determines a ranking of the users (step 107) based on the reputations (RP) of the users at that time. The ranking of users is displayed to all the users A-F. In addition, for each RO the ERs for that RO are used to determine a rank (step 108) of the RO with respect to other ROs. The ranking of all ROs submitted is also displayed to the users A-F. In the illustrated specific embodiment, steps 107 and 108 occur at the end of each computing cycle. In other embodiments, the ranking of users and the ranking of ROs can be done on an ongoing, constant basis. Computing cycles can be of any desired duration.
[0042] After the rankings of steps 107 and 108 have been performed, the next computing cycle starts and processing returns to step 102 as indicated in FIG. 5. Operation of the rating system proceeds through steps 102 through 109, from computing cycle to computing cycle, with ROs being submitted and ARs on the ROs being collected. Each AR is converted into an ER, and the ERs are used to update the reputations of the users as appropriate. The ranking of users is displayed to all the users of the system in order to provide feedback to the users and to keep the users interested and engaged with the system. The public ranking of users incentivizes the users to keep using the system and provides an element of healthy competition.
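The end-of-cycle rankings of steps 107 and 108 can be illustrated with a small sketch. The function names and data shapes here are assumptions, not the patent's implementation: users are ordered by reputation RP, and ROs by the average of their ERs.

```python
from statistics import mean

def rank_users(reputations):
    # reputations: {user: RP}; best reputation first (step 107)
    return sorted(reputations, key=reputations.get, reverse=True)

def rank_rated_objects(ers_by_ro):
    # ers_by_ro: {ro_id: [ER, ...]}; highest average ER first (step 108)
    return sorted(ers_by_ro, key=lambda ro: mean(ers_by_ro[ro]), reverse=True)
```

Both rankings would then be displayed to the users, with the top-ranked RO becoming the winner when step 109 ends the challenge period.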
[0043] After a certain amount of time, the system determines (step 109) that the challenge period is over. In the illustrated example, the highest ranked idea (highest ranked RO) is determined to be the winner of the challenge. The user who submitted that highest ranked RO is alerted by the system that the user has won the reward (step 110) for the best idea. The public nature of the reward, the public ranking of users, and the public ranking of ideas are intended to foster excitement, competition, and future interest in using the rating system.
[0044] FIG. 6 is an illustration of a screen shot of what is displayed on the screen of the network appliance of the administrator ADMIN of the system. The ADMIN is being prompted to post a first challenge. The ADMIN types a description of the challenge into text box 20 as illustrated, and then selects the "POST" button 21. This causes the first challenge to be submitted to the system. Alternatively, an executive sponsor may also post and submit the first challenge.
[0045] FIG. 7 is an illustration of a screen shot of what is then displayed to the users A-F of the system. The initial challenge is advertised to the users. The page prompts the user to type, into text box 22, a second challenge in response to the first challenge. After the user has entered the responsive challenge, the user can then select the "SUBMIT" button 23 to submit the second challenge to the system.
[0046] FIG. 8 is an illustration of a page displayed on the screen of a user's network appliance. The user has entered a second challenge (has typed in a challenge that the user feels best addresses the first challenge of how to make the company's products more environmentally friendly) into the text box 22 before selecting the "SUBMIT" button 23. In this example, the user's challenge is "How can the company reduce the power consumption of its products?"
[0047] FIG. 9 is an illustration of a page displayed on the screen of the network appliance of each user of the system. The page shows each user-submitted second challenge as of the time of viewing. For each second challenge, the user is presented an associated "−1" selectable button and an associated "+1" selectable button. For example, if the user likes the challenge listed as "CHALLENGE 2", then the user can select the "+1" button 24 to the right of the listed "CHALLENGE 2", whereas if the user does not like the challenge listed as "CHALLENGE 2" then the user can select the "−1" button 25 to the right of the listed "CHALLENGE 2." Each user is informed of all of the submitted challenges using this page, and the user is prompted to vote (submit an AR) on each challenge using this page.
[0048] FIG. 10 is an illustration of a page displayed on the screen of the network appliances of each user of the system. The page shows the current ranking of all second challenges received in response to the first challenge.
[0049] FIG. 11 is an illustration of a screen shot of what is displayed on the screen of the network appliance of the administrator ADMIN of the system. The ADMIN is being prompted to post a new challenge. The ADMIN types a description of the challenge into text box 26 as illustrated, and then selects the "POST" button 27. This causes the challenge to be submitted to the system. The challenge in this example is the highest rated of the second challenges submitted by users in response to the first challenge when the last challenge ended. The title of this challenge is: "HOW CAN THE COMPANY REDUCE THE POWER CONSUMPTION OF OUR PRODUCTS?"
[0050] FIG. 12 is an illustration of a screen shot of what is then displayed to the users A-F of the system. The challenge of FIG. 11 is advertised to the users. The page prompts the user to type an idea into text box 28. The user's idea is a solution to the challenge proposed in FIG. 11. After the user has entered his or her idea (which is the object to be rated or "RO"), the user can then select the "SUBMIT" button 29 to submit the RO to the system. This differs from the initial challenge previously presented to the users in FIG. 7. FIG. 7 prompted the users to enter a second challenge in response to the first challenge.
[0051] The manner in which the rating system operates in rating and ranking the second challenges is similar to the operation of the system when rating and ranking ideas. In one novel aspect, each AR is multiplied by a weighting factor to determine a corresponding effective rating (ER) for the second challenge. Rather than the ARs of challenges being averaged to determine a ranking of the challenges, the ERs of challenges are averaged to determine a ranking of challenges. The ERs regarding the challenges or ideas submitted by a particular user are used to determine a quantity called the "reputation" RP of the user. The reputation of a user is therefore dependent upon what other users thought about challenges submitted by the user. Such a reputation RP is maintained for each user of the system. The weighting factor that is used to determine an ER from an AR is a function of the reputation RP of the user who submitted the AR. If the user who submitted the AR had a higher reputation (RP is larger) then the AR of the user is weighted more heavily, whereas if the user who submitted the AR had a lower reputation (RP is smaller) then the AR of the user is weighted less heavily.
[0052] In a second novel aspect, the weighting factor used to determine an ER from an AR is also a function of a crowd-voting probability value PT. The crowd-voting probability value PT is a value that indicates the probability that the user who submitted the AR acts with the crowd in generating ARs. The crowd is the majority of a population that behaves in a similar fashion. The probability value PT is determined by applying Bayes' theorem and taking into account the number of positive and negative votes. If the user who generated the AR is determined to have a higher probability of following the other users or voting with the crowd (PT is closer to 1) then the AR is weighted more heavily, whereas if the user who generated the AR is determined to have a lower probability of following other users or voting with the crowd (PT is closer to 0) then the AR is weighted less heavily.
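Paragraph [0052] only says that PT comes from Bayes' theorem and the counts of positive and negative votes. One plausible realization, shown here strictly as an assumption, is a Laplace-smoothed posterior estimate of how often the rater's past votes agreed with the eventual majority.

```python
def crowd_probability(agreed, disagreed):
    """Estimate PT from a rater's vote history (hypothetical rule).

    agreed    -- count of past ARs that matched the crowd's majority
    disagreed -- count of past ARs that went against the majority
    Returns a value in (0, 1); 0.5 for a rater with no history.
    """
    # Beta(1, 1) prior with a Bernoulli likelihood gives the
    # posterior mean (agreed + 1) / (agreed + disagreed + 2).
    return (agreed + 1) / (agreed + disagreed + 2)
```

With this rule a habitual crowd-follower drifts toward PT = 1 and a contrarian toward PT = 0, matching the weighting behavior the paragraph describes.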
[0053] In a third novel aspect, the weighting factor used to determine an ER from an AR is a function of the freshness RF of the AR. If the AR is relatively old (RF is a large value) then the AR is weighted less heavily, whereas if the AR is relatively fresh (RF is a small value) then the AR is weighted more heavily.
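Paragraph [0053] does not fix a particular decay curve. An exponential half-life, sketched below, is one common choice and is purely an assumption (including the seven-day half-life).

```python
def freshness_weight(rf_age_days, half_life_days=7.0):
    # Weight falls by half every half_life_days: a fresh AR
    # (RF near 0) gets weight ~1.0, an old AR progressively less.
    return 0.5 ** (rf_age_days / half_life_days)
```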
[0054] In a fourth novel aspect, a decay value D is employed in determining a user's reputation. One component of the user's reputation is an average of ERs submitted in the current computing cycle. A second component of the user's reputation is a function of a previously determined reputation RP(T−1) for the user from the previous computing cycle. The component of the user's reputation due to the prior reputation RP(T−1) is discounted by the decay value D. If the user was relatively inactive and disengaged from the system then the decay value D is smaller (not equal to 1 but a little less, for example, D=0.998) and the impact of the user's earlier reputation RP(T−1) is discounted more, whereas if the user is relatively active and engaged with the system then the decay value D is larger (for example, D=1) and the impact of the user's earlier reputation RP(T−1) is discounted less. As users submit ARs and ROs and use the system, the reputations of the users change. The network-based rating system is usable to solicit and receive challenges from a group of users, and to determine a ranking of the challenges to find the challenge that is likely the best. A ranking of challenges, in order from the highest average of ERs for the challenge to the lowest average of ERs for the challenge, is maintained and is displayed to users. At the end of a challenge processing period, the highest rated challenge is determined and that challenge is submitted to users in an idea processing step. In other embodiments, challenges submitted by users in the challenge processing step can be submitted to users for ideas in the idea processing step on an ongoing and constant basis. In these alternative embodiments, the challenge rated in the idea processing step may be a challenge other than the highest rated challenge determined in the challenge processing step. In another novel embodiment, the users rating challenges in the challenge processing step are the same users rating ideas in the idea processing step. In yet other embodiments, the users rating challenges in the challenge processing step are not the same users rating ideas in the idea processing step.
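The decayed reputation update of paragraph [0054] can be sketched as follows. The additive blend of the two components is an assumption consistent with the description (D = 1 for an active user, slightly less, e.g. 0.998, for an inactive one), not the patent's exact equation.

```python
from statistics import mean

def update_reputation(prior_rp, cycle_ers, user_active):
    """Blend this cycle's average ER with the decayed prior reputation.

    prior_rp    -- RP(T-1), the reputation from the previous cycle
    cycle_ers   -- ERs earned on the user's submissions this cycle
    user_active -- True if the user was active/engaged this cycle
    """
    decay = 1.0 if user_active else 0.998   # the decay value D
    cycle_avg = mean(cycle_ers) if cycle_ers else 0.0
    return cycle_avg + decay * prior_rp
```

Under this sketch an inactive user's prior reputation erodes slightly each cycle, while an active user carries it forward undiminished and adds the current cycle's average ER on top.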
[0055] In this next section, the high-level components of a crowd platform will be disclosed. High-Level Environmental Inputs:
[0056] Workflow
[0057] (Who Receives/Reviews What, i.e., Fusion) This would be the default workflow that could change or emerge over time as the platform and the crowd evolve.
[0058] Decision Makers & Points of Interaction on the Workflow:
[0059] 1) Identify the decision makers and their level of authority over certain topic areas, etc.; 2) Identify what points throughout the process require a manual decision; 3) Identify whether the decision-making roles can emerge from the crowd over time.
[0060] Overall Budget
[0061] (For Reporting & Crowdfunding) Allow departments to pledge certain budget amounts and specify how they would like them disbursed (crowdfunding, prize-based, rapid prototyping, etc.)
[0062] Roles/Permissions of Crowd
[0063] (Level of Emergent Behavior Allowed) 1) The level of authority the crowd has over the direction or strategy of the platform (i.e., how the crowd shifts focus from a corporate strategic initiative to something else that may be unrelated); 2) The crowd's role for projects under a certain dollar threshold and the crowd's role for projects over that dollar threshold.
[0064] Crowdfunding Type (Standard or Enterprise)
[0065] 1) Standard: the crowd pitches in; Enterprise: the crowd helps decide where a pre-existing budget goes.
[0066] Requirements to Validate Challenges
[0067] (Graduation Thresholds) 1) When a proposed challenge becomes a solvable challenge (Workflow); 2) Challenge Attributes.
[0068] Requirements to Validate Ideas (Graduation Thresholds) 1) idea validation process; 2) inputs and ideas required for an idea.
[0069] Requirements to Validate Implemented Solutions (Graduation Thresholds): How implemented solutions are validated i) quantitatively or ii) qualitatively by the crowd.
[0070] Points Trade-in Options (Tangible Benefit for Participating)
[0071] For an explanation of how to make and use the "network-based rating system" disclosed in the description above, see U.S. patent application Ser. No. 13/491,560, entitled "User Reputation In Social Network And Ecommerce Rating Systems", by Manas S. Hardas and Lisa S. Purvis, filed Jun. 7, 2012. The subject matter of application Ser. No. 13/491,560 is incorporated herein by reference.
[0072] Although certain specific embodiments are described above for instructional purposes, the teachings of this patent document have general applicability and are not limited to the specific embodiments described above. Although a rating scale involving ratings of −1 and +1 is used in the specific embodiment set forth above, other rating scales can be used. Users may, for example, submit ratings on an integer scale of from one to ten. The rating system need not be a system for rating ideas, but rather may be a system for rating suppliers of products in an ecommerce application. The rating system may be a system for rating products such as in a consumer-report type of application. Although specific equations are set forth above for how to calculate a user's reputation and for how to calculate an effective rating in one illustrative example, the novel general principles disclosed above regarding user reputations and effective ratings are not limited to these specific equations. Although in the specific embodiment set forth above a user is a person, the term user is not limited to a person but rather includes automatic agents. An example of an automatic agent is a computer program like a web crawler that generates ROs and submits the ROs to the rating system. Accordingly, various modifications, adaptations, and combinations of various features of the described embodiments can be practiced without departing from the scope of the invention as set forth in the claims.
What is claimed is:
1. A method comprising:
(a) publishing a first challenge to a plurality of users, wherein the first challenge solicits the users to submit second challenges for rating;
(b) storing in a database an initial reputation value for each of the users;
(c) receiving a first of the second challenges from a first user;
(d) receiving an actual rating for the first of the second challenges from a second user;
(e) determining an effective rating for the first of the second challenges that is a function of the initial reputation value for the second user and a probability that the second user followed other users in generating the actual rating;
(f) determining an updated reputation value for the second user based on the effective rating;
(g) replacing the initial reputation value for the second user with the updated reputation value for the second user in the database;
(h) determining a ranking of the second challenges based in part on the effective rating for the first of the second challenges;
(i) determining which of the second challenges has the highest ranking; and
(j) publishing the second challenge having the highest ranking to the plurality of users and soliciting ideas from the plurality of users to solve the second challenge having the highest ranking.
2. The method of claim 1, wherein (a) through (j) are performed by a rating system.
3. The method of claim 1, wherein the determining of (f) involves multiplying the actual rating of (d) by a weighting factor, and wherein the weighting factor is a function of other effective ratings, and wherein the other effective ratings are ratings for second challenges submitted by the second user.
4. The method of claim 1, wherein the determining of (f) involves averaging a plurality of effective ratings, wherein the effective ratings that are averaged are effective ratings for one or more challenges submitted by the first user.
5. The method of claim 1, wherein the determining of (f) involves multiplying the actual rating of (d) by a weighting factor, and wherein the weighting factor is a function of other effective ratings, and wherein the other effective ratings are ratings for rated objects submitted by the second user.
6. The method of claim 1, wherein the determining of (f) involves multiplying the actual rating of (d) by a weighting factor, and wherein the weighting factor is a function of a reputation value for the second user.
7. The method of claim 1, wherein the determining of (f) involves multiplying the actual rating of (d) by a weighting factor, and wherein the weighting factor is a function of a freshness of the actual rating.
8. The method of claim 1, wherein the determining of (f) involves multiplying the actual rating of (d) by a weighting factor, and wherein the weighting factor is a function of the probability that the second user acts with the crowd in generating actual ratings.
9. The method of claim 1, wherein the probability that the second user acts with the crowd in generating actual ratings is a probability given a general sentiment about the challenge rated in (d).
10. The method of claim 1, wherein the determining of the updated reputation value of (f) involves multiplying a prior reputation value for the first user by a decay value.
11. The method of claim 1, further comprising the steps of:
(k) receiving a first idea from the first user;
(l) receiving an actual rating for the first idea from the second user;
(m) determining an effective rating for the first idea that is a function of the initial reputation value for the second user and the probability that the second user followed other users in generating the actual rating;
(n) determining an updated reputation value for the second user based on the effective rating of the first idea;
(o) replacing the initial reputation value for the second user with the updated reputation value of (n) for the second user in the database; and
(p) determining a ranking of ideas based in part on the effective rating for the first idea.
12. The method of claim 11, wherein (a) through (p) are performed by a rating system, and wherein the ranking of ideas determined in (p) is displayed by the rating system.
13. A method comprising:
(a) storing a database of rating information, wherein the rating information includes a reputation value for a user of a network-based rating system;
(b) receiving an actual rating of a challenge onto the network-based rating system, wherein the actual rating of a challenge is a rating of one of a plurality of rated challenges;
(c) determining an effective rating based at least in part on the actual rating and the reputation value stored in the database;
(d) adding the effective rating into the database;
(e) determining a ranking of the plurality of rated challenges based at least in part on effective ratings stored in the database, wherein (a) through (e) are performed by the network-based rating system; and
(f) publishing the challenge in (e) having the highest ranking to a plurality of users and soliciting ideas from the plurality of users to solve the challenge having the highest ranking.
14. The method of claim 13, wherein the rating information stored in the database further includes a probability value, wherein the probability value indicates a probability that a user votes with a crowd when the user submits actual ratings, and wherein the determining in (c) of the effective rating is also based on the probability value.
15. The method of claim 13, wherein the determining of (c) involves multiplying the actual rating by a weighting factor, wherein the weighting factor is a function of a probability that a user votes with the crowd when the user submits actual ratings.
16. The method of claim 13, wherein the determining of (c) involves multiplying the actual rating by a weighting factor, wherein the weighting factor is a function of a freshness of the actual rating.
17. The method of claim 13, wherein the determining of (c) involves multiplying the actual rating by a weighting factor, wherein the weighting factor is a function of the reputation value.
18. The method of claim 13, wherein the reputation value for the user was calculated by the network-based rating system based at least in part on an average of effective ratings.
19. The method of claim 13, wherein the reputation value for the user was calculated by the network-based rating system based at least in part on an average of effective ratings for challenges submitted by the user.
20. The method of claim 13, wherein the reputation value for the user was calculated by the network-based rating system, and wherein the calculation of the reputation value involved multiplying a prior reputation value by a decay value.
21. The method of claim 13, wherein the network-based rating system determines a reputation value for each of the plurality of users, the method further comprising:
(g) determining a ranking of the users based at least in part on the reputation values for the plurality of users.
22. A network-based rating system comprising:
means for storing a database of rating information, wherein the rating information includes a plurality of effective ratings, wherein each effective rating corresponds to an actual rating, wherein each actual rating is a rating of one of a plurality of rated challenges, wherein one of the rated challenges was submitted by a first user, and wherein the rating information further includes a plurality of reputation values, wherein one of the reputation values is a reputation value for a second user;
means for determining an effective rating corresponding to an actual rating, wherein the actual rating was submitted by the second user for the rated challenge submitted by the first user, wherein the effective rating is: 1) a function of the actual rating submitted by the second user, and 2) a function of the reputation value for the second user;
means for determining and displaying the highest ranked challenge from a ranking of the plurality of rated challenges based at least in part on effective ratings stored in the database; and
means for receiving actual ratings of ideas from a plurality of users.
23. The network-based rating system of claim 22, wherein the means for storing is a portion of a server that stores database information, and wherein the means for determining an effective rating, the means for determining and displaying the highest ranked challenge, and the means for receiving actual ratings of challenges are parts of a rating system program executing on the server.
* * * * *

More Related Content

Similar to Crowdsourcing Business Patent

Modern Elicitation Process
Modern Elicitation ProcessModern Elicitation Process
Modern Elicitation ProcessRajon
 
Failure Modes FMEA-&-Measurement_Systems_Analysis.ppt
Failure Modes FMEA-&-Measurement_Systems_Analysis.pptFailure Modes FMEA-&-Measurement_Systems_Analysis.ppt
Failure Modes FMEA-&-Measurement_Systems_Analysis.pptMadan Karki
 
VIRTUAL CLINIC: A CDSS ASSISTEDTELEMEDICINE FRAMEWORK
VIRTUAL CLINIC: A CDSS ASSISTEDTELEMEDICINE FRAMEWORKVIRTUAL CLINIC: A CDSS ASSISTEDTELEMEDICINE FRAMEWORK
VIRTUAL CLINIC: A CDSS ASSISTEDTELEMEDICINE FRAMEWORKIRJET Journal
 
Fundability Criteria Worksheet_MediCoventures_v3.1
Fundability Criteria Worksheet_MediCoventures_v3.1Fundability Criteria Worksheet_MediCoventures_v3.1
Fundability Criteria Worksheet_MediCoventures_v3.1Aaron Call
 
Requirements validation - requirements engineering
Requirements validation - requirements engineeringRequirements validation - requirements engineering
Requirements validation - requirements engineeringRa'Fat Al-Msie'deen
 
Mechanical Reliability Prediction: A Different Approach
Mechanical Reliability Prediction: A Different ApproachMechanical Reliability Prediction: A Different Approach
Mechanical Reliability Prediction: A Different ApproachHCL Technologies
 
Reliable rating system and method thereof
Reliable rating system and method thereofReliable rating system and method thereof
Reliable rating system and method thereofTal Lavian Ph.D.
 
an-advanced-liquid-cooling-design-for-data-center-final-v3-1-pdf.pdf
an-advanced-liquid-cooling-design-for-data-center-final-v3-1-pdf.pdfan-advanced-liquid-cooling-design-for-data-center-final-v3-1-pdf.pdf
an-advanced-liquid-cooling-design-for-data-center-final-v3-1-pdf.pdfbui thequan
 
Search ch 1 operations and supply_chain_management_revision_notes_
Search ch 1 operations and supply_chain_management_revision_notes_Search ch 1 operations and supply_chain_management_revision_notes_
Search ch 1 operations and supply_chain_management_revision_notes_sudipto das
 
Applying a New Generation of Prognostics Across the Industrial Internet
Applying a New Generation of Prognostics Across the Industrial InternetApplying a New Generation of Prognostics Across the Industrial Internet
Applying a New Generation of Prognostics Across the Industrial InternetSentient Science
 
Gaining Insights into Patient Satisfaction through Interpretable Machine Lear...
Gaining Insights into Patient Satisfaction through Interpretable Machine Lear...Gaining Insights into Patient Satisfaction through Interpretable Machine Lear...
Gaining Insights into Patient Satisfaction through Interpretable Machine Lear...IRJET Journal
 
Requirement Analysis & Specification sharbani bhattacharya
Requirement Analysis & Specification sharbani bhattacharyaRequirement Analysis & Specification sharbani bhattacharya
Requirement Analysis & Specification sharbani bhattacharyaSharbani Bhattacharya
 
Requirements Management Booklet Pages
Requirements Management Booklet PagesRequirements Management Booklet Pages
Requirements Management Booklet PagesTonda MacLeod
 
CTI Technical Advisory Committee (TAC) Meeting December 1, 2015
CTI Technical Advisory Committee (TAC) Meeting December 1, 2015CTI Technical Advisory Committee (TAC) Meeting December 1, 2015
CTI Technical Advisory Committee (TAC) Meeting December 1, 2015Credential Engine
 
Credit Risk Evaluation Model
Credit Risk Evaluation ModelCredit Risk Evaluation Model
Credit Risk Evaluation ModelMihai Enescu
 
Digital Twin based Product Development in Life Science Industry – Sustainable...
Digital Twin based Product Development in Life Science Industry – Sustainable...Digital Twin based Product Development in Life Science Industry – Sustainable...
Digital Twin based Product Development in Life Science Industry – Sustainable...Arindam Chakraborty, Ph.D., P.E. (CA, TX)
 
Monetizing Risks - A Prioritization & Optimization Solution
Monetizing Risks - A Prioritization & Optimization SolutionMonetizing Risks - A Prioritization & Optimization Solution
Monetizing Risks - A Prioritization & Optimization SolutionBlack & Veatch
 

Similar to Crowdsourcing Business Patent (20)

Modern Elicitation Process
Modern Elicitation ProcessModern Elicitation Process
Modern Elicitation Process
 
Failure Modes FMEA-&-Measurement_Systems_Analysis.ppt
Failure Modes FMEA-&-Measurement_Systems_Analysis.pptFailure Modes FMEA-&-Measurement_Systems_Analysis.ppt
Failure Modes FMEA-&-Measurement_Systems_Analysis.ppt
 
VIRTUAL CLINIC: A CDSS ASSISTEDTELEMEDICINE FRAMEWORK
VIRTUAL CLINIC: A CDSS ASSISTEDTELEMEDICINE FRAMEWORKVIRTUAL CLINIC: A CDSS ASSISTEDTELEMEDICINE FRAMEWORK
VIRTUAL CLINIC: A CDSS ASSISTEDTELEMEDICINE FRAMEWORK
 
Fundability Criteria Worksheet_MediCoventures_v3.1
Fundability Criteria Worksheet_MediCoventures_v3.1Fundability Criteria Worksheet_MediCoventures_v3.1
Fundability Criteria Worksheet_MediCoventures_v3.1
 
Requirements validation - requirements engineering
Requirements validation - requirements engineeringRequirements validation - requirements engineering
Requirements validation - requirements engineering
 
Mechanical Reliability Prediction: A Different Approach
Mechanical Reliability Prediction: A Different ApproachMechanical Reliability Prediction: A Different Approach
Mechanical Reliability Prediction: A Different Approach
 
Reliable rating system and method thereof
Reliable rating system and method thereofReliable rating system and method thereof
Reliable rating system and method thereof
 
Scada Analysis1.2
Scada Analysis1.2Scada Analysis1.2
Scada Analysis1.2
 
an-advanced-liquid-cooling-design-for-data-center-final-v3-1-pdf.pdf
an-advanced-liquid-cooling-design-for-data-center-final-v3-1-pdf.pdfan-advanced-liquid-cooling-design-for-data-center-final-v3-1-pdf.pdf
an-advanced-liquid-cooling-design-for-data-center-final-v3-1-pdf.pdf
 
Search ch 1 operations and supply_chain_management_revision_notes_
Search ch 1 operations and supply_chain_management_revision_notes_Search ch 1 operations and supply_chain_management_revision_notes_
Search ch 1 operations and supply_chain_management_revision_notes_
 
load_testing.ppt.pptx
load_testing.ppt.pptxload_testing.ppt.pptx
load_testing.ppt.pptx
 
Us8448146
Us8448146Us8448146
Us8448146
 
Applying a New Generation of Prognostics Across the Industrial Internet
Applying a New Generation of Prognostics Across the Industrial InternetApplying a New Generation of Prognostics Across the Industrial Internet
Applying a New Generation of Prognostics Across the Industrial Internet
 
Gaining Insights into Patient Satisfaction through Interpretable Machine Lear...
Gaining Insights into Patient Satisfaction through Interpretable Machine Lear...Gaining Insights into Patient Satisfaction through Interpretable Machine Lear...
Gaining Insights into Patient Satisfaction through Interpretable Machine Lear...
 
Requirement Analysis & Specification sharbani bhattacharya
Requirement Analysis & Specification sharbani bhattacharyaRequirement Analysis & Specification sharbani bhattacharya
Requirement Analysis & Specification sharbani bhattacharya
 
Requirements Management Booklet Pages
Requirements Management Booklet PagesRequirements Management Booklet Pages
Requirements Management Booklet Pages
 
CTI Technical Advisory Committee (TAC) Meeting December 1, 2015
CTI Technical Advisory Committee (TAC) Meeting December 1, 2015CTI Technical Advisory Committee (TAC) Meeting December 1, 2015
CTI Technical Advisory Committee (TAC) Meeting December 1, 2015
 
Credit Risk Evaluation Model
Credit Risk Evaluation ModelCredit Risk Evaluation Model
Credit Risk Evaluation Model
 
Digital Twin based Product Development in Life Science Industry – Sustainable...
Digital Twin based Product Development in Life Science Industry – Sustainable...Digital Twin based Product Development in Life Science Industry – Sustainable...
Digital Twin based Product Development in Life Science Industry – Sustainable...
 
Monetizing Risks - A Prioritization & Optimization Solution
Monetizing Risks - A Prioritization & Optimization SolutionMonetizing Risks - A Prioritization & Optimization Solution
Monetizing Risks - A Prioritization & Optimization Solution
 


Crowdsourcing Business Patent

  • 1. US 2014/0222524 A1 — United States Patent Application Publication, Pluschkell et al. Pub. Date: Aug. 7, 2014. (54) CHALLENGE RANKING BASED ON USER REPUTATION IN SOCIAL NETWORK AND ECOMMERCE RATINGS. (71) Applicant: Mindjet LLC, San Francisco, CA (US). (72) Inventors: Paul Pluschkell, Pleasanton, CA (US); Dustin W. Haisler, Elgin, TX (US); Charboneau, Independence, MO (US). (73) Assignee: Mindjet LLC, San Francisco, CA (US). (21) Appl. No.: 13/959,733. (22) Filed: Aug. 5, 2013. (60) Provisional application No. 61/679,747, filed on Aug. 5, 2012. (51) Int. Cl. G06Q 10/06 (2006.01). (52) CPC G06Q 10/0637 (2013.01); USPC 705/7.36. (57) Abstract: A network-based rating system provides a mechanism whereby users can submit and rate challenges in a challenge processing step and submit and rank ideas in an idea processing step that correspond to the challenges in the challenge processing step. The highest rated challenge is determined in the challenge processing step and is then submitted to users in an idea processing step. Alternatively, challenges submitted by users in the challenge processing step are submitted to users for ideas in the idea processing step on an ongoing basis. In these alternative embodiments, the challenge rated in the idea processing step is a challenge other than the highest rated challenge determined in the challenge processing step. In another embodiment, the users rating challenges are the same users rating ideas in the idea processing step, and in other embodiments the users are not the same as those rating ideas in the idea processing step. (Front-page figure: the network-based rating system 1 with a central server.)
  • 2. Patent drawing, Sheet 1 of 9: FIG. 1, the network-based rating system 1.
  • 3. Patent drawing, Sheet 2 of 9: FIG. 2, flowchart of the innovation process — challenge processing (70); challenge processing complete? (85); idea processing (100); idea processing complete? (115); idea implementation (120); implementation complete? (125); idea measurement (130); idea measurement complete? (135).
  • 4. Patent drawing, Sheet 3 of 9: FIG. 3, flowchart of a method of operation of the rating system — challenge processing (70); challenge processing complete? (85); idea processing (100); idea processing complete? (115).
  • 5. Patent drawing, Sheet 4 of 9: FIG. 4, flowchart of challenge processing step 70 — (71) display a first challenge to the users of a rating system that solicits the users to submit second challenges for rating; (72) store an initial reputation value for each of the users; (73) receive a first of the second challenges from a first user; (74) receive an actual rating of the second challenge from a second user; (75) for each actual rating, generate a corresponding effective challenge rating (ER), where how the AR is adjusted is a function of (a) the reputation (RP) of the user who submitted the AR, (b) the freshness (RF) of the AR, and (c) a probability (PT) that the user who generated the actual rating acts with the crowd in generating actual ratings; (76) determine an updated reputation value based on the effective value; (77) replace the initial reputation value with the updated reputation value; (78) end of computing cycle?; (79) determine a ranking of the second challenges based in part on the effective ratings; (80) determine which of the second challenges has the highest rating; (81) publish the challenge with the highest rating to users of the rating system and solicit ideas from the users to solve the highest ranked challenge.
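The challenge-rating cycle that FIG. 4 enumerates can be sketched in a few lines. This is an illustrative reconstruction, not the patent's implementation: the data structures, the function name, and the simple averaging rule for updating a submitter's reputation are all assumptions, and the freshness and crowd-probability adjustments of step 75 are omitted for brevity.

```python
def run_cycle(submissions, actual_ratings, reputations):
    """One computing cycle: weight each actual rating (AR) by its rater's
    reputation (RP) to get an effective rating (ER), average the ERs per
    challenge, update submitter reputations, and return the challenges
    ranked best-first.

    submissions:    {challenge_id: submitting_user}
    actual_ratings: list of (challenge_id, rating_user, ar), ar in {-1, +1}
    reputations:    {user: rp}, updated in place
    """
    ers = {}
    for challenge_id, rater, ar in actual_ratings:
        er = ar * reputations.get(rater, 1.0)  # ER = reputation-weighted AR
        ers.setdefault(challenge_id, []).append(er)
    scores = {cid: sum(v) / len(v) for cid, v in ers.items()}
    # A submitter's reputation reflects what other users thought of the
    # challenges the submitter posted (here: a simple running average).
    for cid, score in scores.items():
        submitter = submissions[cid]
        reputations[submitter] = (reputations.get(submitter, 1.0) + score) / 2
    return sorted(scores, key=scores.get, reverse=True)
```

The highest-ranked challenge returned by such a cycle would then be published to solicit ideas, as in step 81.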
  • 6. Patent drawing, Sheet 5 of 9: FIG. 5, flowchart of idea processing — (102) post a challenge to the users of the system, advertise a reward, and a user submits an object to be rated (RO); (103) a user rates an RO of another user by submitting an actual rating (AR) for that RO; (104) adjust each actual rating (AR), thereby generating a corresponding effective rating (ER), where how the AR is adjusted is a function of (a) the reputation (RP) of the user who submitted the AR, (b) the freshness (RF) of the AR, and (c) a probability (PT) that the user who generated the actual rating acts with the crowd in generating actual ratings; (105) redetermine the reputation (RP) of the user who submitted the RO; (106) end of computing cycle?; (107) rank users according to user reputation (RP) and display a ranking of users; (108) for each RO, use the ERs submitted for that RO to rank the RO, and display a ranking of ROs; (109) challenge over?; (110) grant the reward to the user who submitted the highest ranked RO.
  • 7. Patent drawing, Sheet 6 of 9: FIG. 6, a system administrator or executive sponsor posts an initial or first challenge ("INITIAL CHALLENGE: HOW DO WE MAKE OUR PRODUCTS MORE ENVIRONMENTALLY FRIENDLY?"); FIG. 7, the initial challenge as presented to a user, with a text box ("TYPE YOUR RESPONSIVE CHALLENGE HERE") and a SUBMIT button.
  • 8. Patent drawing, Sheet 7 of 9: FIG. 8, a user types in a responsive challenge ("HOW CAN THE COMPANY REDUCE THE POWER CONSUMPTION OF ITS PRODUCTS?") and submits it as a responsive challenge (RC); FIG. 9, a page displays the responsive challenges of other users, and the user rates another user's responsive challenge by submitting an actual rating (AR) with a value of -1 or +1.
  • 9. Patent drawing, Sheet 8 of 9: FIG. 10, display of the current ranking of responsive challenges; FIG. 11, the system administrator posts the highest ranked responsive challenge ("HOW CAN THE COMPANY REDUCE THE POWER CONSUMPTION OF OUR PRODUCTS?") as a new challenge.
  • 10. Patent drawing, Sheet 9 of 9: FIG. 12, the challenge as presented to a user soliciting ideas, with a reward of $1000.00 and a text box ("TYPE YOUR IDEA HERE").
  • 11. US 2014/0222524 A1 — CHALLENGE RANKING BASED ON USER REPUTATION IN SOCIAL NETWORK AND ECOMMERCE RATINGS. CROSS REFERENCE TO RELATED APPLICATION — [0001] This application claims the benefit under 35 U.S.C. §119 from provisional U.S. patent application Ser. No. 61/679,747, entitled "Challenge Ranking Based on User Reputation in Social Network and Ecommerce Rating Systems," filed on Aug. 5, 2012, the subject matter of which is incorporated herein by reference. BACKGROUND INFORMATION — [0002] Identifying and solving strategic challenges are critical activities that can greatly influence an organization's success. Examples of strategic challenges that many companies experience include what new products or product features are required by the marketplace. Other companies have challenges regarding how to reduce costs or how to raise capital. The ability of organizations to identify these challenges can be determinative of the organization's success. This ability can also be a competitive advantage for the organization if the company is able to identify and solve strategic challenges faster than its competitors. Many companies, however, do not use efficient processes for identifying and solving strategic challenges. [0003] Companies also tend to rely on a few individuals to identify and solve the organization's strategic challenges. Companies may also employ serial approaches to identify these challenges. For example, a corporation may rely on a single executive sponsor to identify and propose a strategic challenge to all other employees of the company. After this first challenge has been solved, the sponsor may then identify and propose subsequent challenges. Because this process is inefficient and does not utilize more employees in the challenge identification process, a better methodology is desired.
SUMMARY — [0004] A network-based rating system provides a mechanism whereby users can submit and rate challenges in a challenge processing step and submit and rank ideas in an idea processing step that correspond to the challenges submitted in the challenge processing step. In one embodiment the highest rated challenge is determined in the challenge processing step and then that challenge is submitted to users in an idea processing step. In other embodiments, challenges submitted by users in the challenge processing step can be submitted to users for ideas in the idea processing step on an ongoing and constant basis. In these alternative embodiments, the challenge rated in the idea processing step is a challenge other than the highest rated challenge determined in the challenge processing step. In yet another embodiment, the users rating challenges in the challenge processing step are the same users rating ideas in the idea processing step. In other embodiments the users rating challenges in the challenge processing step are not the same users rating ideas in the idea processing step. [0005] The manner in which the rating system operates in rating and ranking the challenges is similar to the operation of the system when rating and ranking ideas. In one novel aspect, a user provides an actual rating (AR) of a second challenge submitted by another user in response to a first challenge. The AR is multiplied by a weighting factor to determine a corresponding effective rating (ER) for the second challenge. Rather than the ARs of challenges being averaged to determine a ranking of the challenges, the ERs of challenges are averaged to determine a ranking of challenges. The ERs regarding the challenges submitted by a particular user are used to determine a quantity called the "reputation" RP of the user. The reputation of a user is therefore dependent upon what other users thought about challenges submitted by the user.
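The AR-to-ER weighting described in paragraph [0005] can be sketched as follows. The patent does not disclose the weighting function itself; the simple multiplication by the rater's reputation below is an assumption for illustration, and both function names are invented here.

```python
def effective_rating(ar: float, rater_reputation: float) -> float:
    """Scale an actual rating (AR) by the reputation (RP) of the user who
    submitted it: a higher-reputation rater's AR counts more heavily."""
    return ar * rater_reputation


def rank_challenges(ratings_by_challenge: dict) -> list:
    """Rank challenges by the average of their effective ratings (ERs),
    rather than by the average of the raw ARs.

    ratings_by_challenge: {challenge: [(ar, rater_reputation), ...]}
    """
    averages = {
        challenge: sum(effective_rating(ar, rp) for ar, rp in ratings) / len(ratings)
        for challenge, ratings in ratings_by_challenge.items()
    }
    return sorted(averages, key=averages.get, reverse=True)
```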
Such a reputation RP is maintained for each user of the system. The weighting factor that is used to determine an ER from an AR is a function of the reputation RP of the user who submitted the AR. If the user who submitted the AR had a higher reputation (RP is larger) then the AR of the user is weighted more heavily, whereas if the user who submitted the AR had a lower reputation (RP is smaller) then the AR of the user is weighted less heavily. [0006] In a second novel aspect, the weighting factor used to determine an ER from an AR is also a function of a crowd voting probability value PT. The crowd voting probability value PT is a value that indicates the probability that the user who submitted the AR acts with the crowd in generating ARs. The crowd is the majority of a population that behaves in a similar fashion. The probability value PT is determined by applying Bayes' theorem and taking into account the number of positive and negative votes. If the user who generated the AR is determined to have a higher probability of voting with the crowd (PT is closer to 1) then the AR is weighted more heavily, whereas if the user who generated the AR is determined to have a lower probability of voting with the crowd (PT is closer to 0) then the AR is weighted less heavily. [0007] In a third novel aspect, the weighting factor used to determine an ER from an AR is a function of the freshness RF of the AR. If the AR is relatively old (RF is a large value) then the AR is weighted less heavily, whereas if the AR is relatively fresh (RF is a small value) then the AR is weighted more heavily. [0008] In a fourth novel aspect, a decay value D is employed in determining a user's reputation. One component of the user's reputation is an average of ERs submitted in the current computing cycle. A second component of the user's reputation is a function of a previously determined reputation RP(T-1) for the user from the previous computing cycle.
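Paragraphs [0006] and [0007] leave the exact formulas for PT and the freshness adjustment unstated. One plausible reading, sketched below under stated assumptions: PT as the posterior mean of a Beta(1, 1) prior over the user's history of voting with or against the crowd (a standard Bayes-rule estimate from positive and negative vote counts), and freshness as an exponential decay with an assumed half-life.

```python
def crowd_probability(votes_with_crowd: int, votes_against_crowd: int) -> float:
    """Estimate PT, the probability that a user votes with the crowd, as the
    posterior mean of a Beta(1, 1) prior updated with the user's history.
    With no history this gives 0.5; it approaches 1 as the user's votes
    consistently agree with the majority."""
    return (votes_with_crowd + 1) / (votes_with_crowd + votes_against_crowd + 2)


def freshness_weight(age: float, half_life: float = 30.0) -> float:
    """Weight an AR by its freshness RF: an old AR (large age) is weighted
    less heavily than a fresh one. The 30-period half-life is an assumed
    parameter, not a value from the patent."""
    return 0.5 ** (age / half_life)
```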
The component of the user's reputation due to the prior reputation RP(T-1) is discounted by the decay value D. If the user was relatively inactive and disengaged from the system then the decay value D is smaller (not equal to 1 but a little less, for example, D = 0.998) and the impact of the user's earlier reputation RP(T-1) is discounted more, whereas if the user is relatively active and engaged with the system then the decay value D is larger (for example, D = 1) and the impact of the user's earlier reputation RP(T-1) is discounted less. As users submit ARs and ROs and use the system, the reputations of the users change. The network-based rating system is usable to solicit and extract challenges from a group of users, and to determine a ranking of the challenges to find the challenge that is likely the best. A ranking of challenges, in order from the highest average of ERs for the challenge to the lowest average of ERs for the challenge, is maintained and is displayed to users. At the end of a challenge processing period, the highest rated challenge is determined and that challenge is submitted to users in the idea processing step.
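The decay mechanism of paragraph [0008] can be sketched as below. The patent gives the two components (this cycle's average ER and the decayed prior reputation RP(T-1)) and the example decay values D = 1 and D = 0.998, but not how the components are combined; the equal-weight blend here is an assumption for illustration.

```python
def update_reputation(prior_rp: float, cycle_ers: list, active: bool) -> float:
    """End-of-cycle reputation update: discount the prior reputation
    RP(T-1) by the decay value D, then blend it with the average of the
    ERs the user's submissions earned this cycle.

    D = 1.0 for an active, engaged user (prior not discounted);
    D = 0.998 for an inactive, disengaged user (prior discounted slightly).
    """
    d = 1.0 if active else 0.998
    decayed_prior = d * prior_rp
    if not cycle_ers:
        return decayed_prior
    return (decayed_prior + sum(cycle_ers) / len(cycle_ers)) / 2
```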
  • 12. [0009] Further details and embodiments and techniques are described in the detailed description below. This summary does not purport to define the invention. The invention is defined by the claims. BRIEF DESCRIPTION OF THE DRAWINGS — [0010] The accompanying drawings, where like numerals indicate like components, illustrate embodiments of the invention. [0011] FIG. 1 is a drawing of a network-based rating system. [0012] FIG. 2 is a flowchart of an innovation process in accordance with one novel aspect. [0013] FIG. 3 is a flowchart of a method of operation of the network-based rating system 1 in accordance with one novel aspect. [0014] FIG. 4 is a flowchart of a method of operation of the network-based rating system 1 for receiving, rating, and selecting challenges. [0015] FIG. 5 is a flowchart of a method of operation of the network-based rating system 1 for receiving, rating, and selecting ideas. [0016] FIG. 6 is an illustration of a screen shot of what is displayed on the screen of a network appliance when a system administrator or executive sponsor is posting a first challenge. [0017] FIG. 7 is an illustration of a screen shot of how the first challenge is presented to users of the system. [0018] FIG. 8 is an illustration of a page displayed on the screen of a user's network appliance after the user has entered a second challenge into the page but before the user has selected the "SUBMIT" button. [0019] FIG. 9 is an illustration of a page that displays challenges to the users of the system and solicits the users to submit actual ratings (ARs). [0020] FIG. 10 is an illustration of a page that displays a ranking of second challenges that have been submitted by users in response to a first challenge. [0021] FIG. 11 is an illustration of a screen shot of what is displayed on the screen of the network appliance of the administrator ADMIN when the ADMIN is posting a challenge soliciting ideas to solve the highest ranked challenge from FIG. 10. [0022] FIG. 12 is an illustration of a screen shot of how the highest ranked challenge is presented to users of the system before the user submits the user's idea for solving the challenge. DETAILED DESCRIPTION — [0023] Reference will now be made in detail to some embodiments of the invention, examples of which are illustrated in the accompanying drawings. [0024] FIG. 1 is a diagram of a network-based rating system 1 in accordance with one novel aspect. Each of the users A-F uses an application (for example, a browser) executing on a networked appliance to communicate via network 8 with a rating system program 9 executing on a central server 10. Rating system program 9 accesses and maintains a database 20 of stored rating information. Blocks 2-7 represent networked appliances. The networked appliance of a user is typically a personal computer or cellular telephone or another suitable input/output device that is coupled to communicate with network 8. Each network appliance has a display that the user of the network appliance can use to view rating information. The network appliance also provides the user a mechanism such as a keyboard or touchpad or mouse for entering information into the rating system. [0025] Network 8 is typically a plurality of networks and may include a local area network and/or the internet. In the specific example described here, a company that is interested in producing environmentally sustainable or ecologically "friendly" products wants to effectively and efficiently identify several strategic challenges associated with producing these ecologically friendly products. The users A-F are employees of the company and may include an executive sponsor. The network 8 is an intra-company private computer network maintained by the company for communication between employees when performing company business. The rating system program 9 is administered by the network administrator ADMIN of the company network 8.
The admin istratorADMIN interacts with network 8 and central server 9 via network appliance 11. [0026] FIG. 2 is a ?owchart of an innovation process. In step 70, the challenge processing step, challenges are identi ?ed, validated, and selected. In step 70, an executive sponsor or employee of a company may identify a certain challenge such as a recurring point offrustration ofone or several ofthe company’s present or future customers. During validation of the challenge, employees of the company may vote, re?ne, rate and rank the challenges of other employees. After a challenge is selected as a result of the validation step, the challenge is queued for idea submissions. In a step 85, a determination ofwhether challenge processing is completed is made. If further challenge processing is required, step 70 continues and ifnot, idea processing begins. [0027] In the idea processing step 100, the challenge selected in the challenge processing step is opened to the employees of the company for idea submissions. Any ideas submitted then proceed through validation, evolution, selec tion, and funding processes within step 100. For example, the employees of the company may submit ideas to solve the challenge selected in step 70. Ideas are reviewed by the employees or “crowd” through voting, rating, ranking, page views, and other mechanisms. The ranked ideas are passed through a process that re?nes the original concepts for those ideas and then the top ideas are selected and the idea funding process within step 115 occurs. During the idea funding pro cess, employees have an opportunity to select ideas on which to spend pre-budgeted innovation funds. Next a determina tion is made in step 115 that idea processing is complete. Ifthe selection has not been made then further idea processing can continue. In alternative embodiments, the process can return to the challenge processing stage for further identi?cation, validation, and selection of challenges. 
[0028] In the idea implementation step, step 120, the funded ideas are implemented. Once the ideas are implemented, then a determination is made whether the idea implementation step is complete in a step 125. If the idea implementation is complete, then an idea measurement step 130 commences. If idea implementation is not complete, the idea implementation phase can continue. In alternative embodiments, the process can return to the idea processing step 100 or the challenge processing step 70 after step 120.
[0029] In an idea measurement step 130, the employees review whether the implemented ideas of the previous step solved the original challenge submitted during the challenge processing step. If a determination is made in a step 135 that idea measurement has completed, then step 70, the challenge
processing step, can be repeated. If not, further idea measurement may continue. In one novel embodiment, the innovation process of FIG. 2 can begin or continue at any point within the process.
[0030] FIG. 3 is a flowchart of a method 50 involving the operation of the network-based rating system of FIG. 1. FIG. 3 includes a portion 50 of the innovation process of FIG. 2. In a challenge processing step (step 70), a network-based rating system is used i) to propose a first challenge to users of the rating system; ii) to rate second challenges received in response to the first challenge; and iii) to select a ranked challenge. Once the challenge processing step is complete (step 80), ideas for solving the challenges identified in step 70 are submitted, rated, and selected in an idea processing step (step 100). Once the idea processing step is complete, subsequent steps (not shown) may occur. Process steps that include funding or implementation of the ideas selected in the idea processing step are examples of subsequent processing steps. In an alternative embodiment, the idea and challenge processing steps occur simultaneously. The challenge processing step (step 70) and the idea processing step (step 100) will now be discussed in more detail.
[0031] FIG. 4 is a flowchart of a method 70 involving an operation of the network-based rating system 1 of FIG. 1. The administrator ADMIN or an executive sponsor interacts with the rating system program 9, thereby causing a first challenge to be displayed to users A-F of the rating system of FIG. 1. This first challenge solicits the users to submit second challenges that will then be rated by the users A-F. The first challenge may be a solicitation to submit challenges for defining new products or product roadmaps for the company, product features, or any other strategic issue relating to the company or the marketplace in which the company competes with other organizations.
Through the system, each user is notified of the initial challenge via the user's networked appliance. In the present example, the initial challenge is titled "HOW CAN WE MAKE OUR PRODUCTS MORE ENVIRONMENTALLY FRIENDLY?" The web page that presents the first challenge to a user also includes a text field. The web page solicits the user to enter into the text field the user's challenge, or a second challenge, in response to the first challenge proposed to the users.
[0032] In the method of FIG. 4, a user views the challenge web page and in response types the second challenge into the text box. The user's challenge may be, respective to that specific user, a challenge that they believe is most relevant to meeting the corporate objectives framed by the ADMIN or the executive sponsor in the first challenge. In this example, the first of the second challenges submitted by a user of the rating system is "HOW CAN THE COMPANY REDUCE THE POWER CONSUMPTION OF OUR PRODUCTS." After typing the first of the second challenges in response to the first challenge, the user selects a "SUBMIT" button on the page, thereby causing the second challenge to be submitted to and received by (step 73) the rating system. Multiple such responsive or second challenges can be submitted by multiple users in this way. An individual user may submit more than one responsive challenge if desired. As these second challenges are submitted and received by the rating system, a list of all the submitted challenges is presented to the users of the system. A user can read the challenges submitted by other users, consider the merits of those second challenges, and then submit ratings corresponding to those challenges. The rating is referred to here as an "actual rating" or an "AR". In the present example, along with each responsive challenge displayed to the user is a pair of buttons. The first button is denoted "-1".
The user can select this button to submit a negative rating or a "no" vote for the second challenge. The second button is denoted "+1". The user can select this button to submit a positive rating or a "yes" vote for the responsive challenge. In the method of FIG. 4, the user selects the desired button, thereby causing the actual rating to be submitted to and received (step 74) by the system. Before the user submits the AR, the user cannot see the number of +1 ARs and the number of -1 ARs that the second challenge has received. This prevents the user from being influenced by how others have voted on the rated object. The system records the AR in association with the responsive challenge to which the AR pertains. Multiple ARs are collected in this way for every challenge from the various users of the system.
[0033] Rather than just using the raw ARs to determine a consensus of what the users think the best challenge is, each AR is multiplied by a rating factor to determine an adjusted rating referred to as an "Effective Rating" or an "ER" (step 75). How the AR is adjusted to determine the associated ER is a function of: A) a previously determined reputation (RP) of the user who submitted the AR, B) the freshness (RF) of the AR, and C) a probability that the user who generated the AR acts with the crowd in generating ARs. The details of how an ER is determined from an AR are described in further detail below.
[0034] The reputation (RP) of a user is used as an indirect measure of how good the challenges submitted by the user tend to be. The user's reputation is dependent upon ERs derived from the ARs received from other users regarding the challenges submitted by the user, and the rating system sets an initial reputation for each user (step 72). Accordingly, in the example of FIG. 4, after a new actual rating AR is received regarding the second challenge of a user, the reputation of the user is updated (step 76) and replaces that user's initial reputation (step 77).
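The conversion from an AR to an ER can be sketched as below. The description gives only the three inputs (reputation RP, freshness RF, crowd probability PT); the exponential freshness decay, the multiplicative combination, and the half-life default are assumptions for illustration, not the patent's actual equations.

```python
import math

def effective_rating(ar, reputation, age_seconds, crowd_prob,
                     freshness_half_life=7 * 24 * 3600):
    """Illustrative sketch: weight an actual rating (AR) into an
    effective rating (ER) using the three inputs named in [0033].

    ar          : +1 or -1 actual rating submitted by a user
    reputation  : previously determined reputation RP of the rater
    age_seconds : age of the rating (larger = less fresh, cf. RF)
    crowd_prob  : probability PT that the rater votes with the crowd

    ASSUMPTION: the weighting factor is the product of the three
    components, with freshness decaying exponentially over time.
    """
    freshness = math.exp(-age_seconds * math.log(2) / freshness_half_life)
    weighting_factor = reputation * crowd_prob * freshness
    return ar * weighting_factor
```

Under this sketch, a fresh +1 from a high-reputation, crowd-aligned rater contributes an ER near +RP, while stale ratings or ratings from low-reputation raters contribute proportionally less.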
If the current computing cycle has not ended, then processing returns to step 73 and new challenges submitted by users in response to the first challenge may be received into the system. Users may submit actual ratings on various ones of the second challenges. Each time an actual rating is made, the reputation of the user who generated the second challenge is updated.
[0035] After the replacing of the initial reputation value with the updated reputation value of step 77 has been performed, the next computing cycle starts and processing returns to step 73 as indicated in FIG. 4. Operation of the rating system proceeds through steps 73 through 77, from computing cycle to computing cycle, with second challenges being submitted and actual ratings on the second challenges being collected. Each actual rating is converted into an effective rating, and the effective ratings are used to update the reputations of the users as appropriate. After a certain amount of time, the system determines (step 78) that the challenge period is over.
[0036] At the end of the computing cycle (step 78), processing proceeds to step 79. For each second challenge, the effective ratings for that second challenge are used to determine a rank (step 79) of the challenge with respect to other challenges. A determination of the highest ranked challenge is also made (step 80). The ranking of all challenges submitted is also displayed to the users A-F. In the illustrated specific embodiment, step 79 occurs near the end of each computing
cycle. In other embodiments, the ranking of challenges can be done on an ongoing and constant basis. Computing cycles can be of any desired duration.
[0037] FIG. 5 is a flowchart of the method 100 of FIG. 3 involving an operation of the network-based rating system 1 of FIG. 1. The administrator ADMIN interacts with the rating system program 9, thereby causing the highest ranked challenge to be posted (step 101) to the users A-F of the system. The highest ranked challenge was determined by the challenge processing step (step 80) of FIG. 3. Through the system, each user is notified of the challenge via the user's networked appliance. In the present example, the challenge is titled "HOW CAN THE COMPANY REDUCE THE POWER CONSUMPTION OF OUR PRODUCTS?" The web page that presents this challenge to a user also includes a text field. The web page solicits the user to type the user's idea into the text field.
[0038] In the method of FIG. 5, a user views this challenge advertising web page and in response types the user's idea into the text box. The user's idea is an object to be rated or a "rated object" in this step. After typing the idea for how the company can reduce power consumption in its products into the text box, the user selects a "SUBMIT" button on the page, thereby causing the Rated Object (RO) to be submitted (step 102) to the rating system. Multiple such ROs are submitted by multiple users in this way. An individual user may submit more than one RO if desired. As ROs are submitted, a list of all the submitted ROs is presented to the users of the system. A user can read the rated objects (ROs) submitted by other users, consider the merits of those ROs, and then submit ratings corresponding to those ROs in a manner similar to that in which second challenges are rated in the challenge processing step 70 of FIG. 3. The rating is referred to here as an "actual rating" or an "AR".
In the present example, along with each idea displayed to the user is a pair of buttons. The first button is denoted "-1". The user can select this button to submit a negative rating or a "no" vote for the idea. The second button is denoted "+1". The user can select this button to submit a positive rating or a "yes" vote for the idea. In the method of FIG. 5, the user selects the desired button, thereby causing the Actual Rating AR to be submitted (step 103) to the system. Before the user submits the AR, the user cannot see the number of +1 ARs and the number of -1 ARs the RO has received. This prevents the user from being influenced by how others have voted on the RO. The system records the AR in association with the RO (the rated object) to which the AR pertains. Multiple ARs are collected in this way for every RO from the various users of the system.
[0039] Rather than just using the raw ARs to determine a consensus of what the users think the best submitted idea is, each AR is multiplied by a rating factor to determine (step 104) an adjusted rating referred to as an "Effective Rating" or an "ER". How the AR is adjusted to determine the associated ER is a function of: A) a previously determined reputation (RP) of the user who submitted the AR, B) the freshness (RF) of the AR, and C) a probability that the user who generated the AR acts with the crowd in generating ARs. The details of how an ER is determined from an AR are described in further detail below.
[0040] The reputation (RP) of a user is used as an indirect measure of how good the ROs of the user tend to be. The user's reputation is dependent upon ERs derived from the ARs received from other users regarding the ROs submitted by the user. Accordingly, in the example of FIG. 5, after a new actual rating AR is received regarding the rated object (the RO) of a user, the reputation of the user is redetermined (step 105). If the current computing cycle has not ended, then processing returns to step 102.
New rated objects may be received into the system. Users may submit ARs on various ones of the ROs displayed to the users. Each time an AR is made, the reputation of the user who generated the RO is updated.
[0041] At the end of the computing cycle (step 106), processing proceeds to step 107. The system determines a ranking of the users (step 107) based on the reputations (RP) of the users at that time. The ranking of users is displayed to all the users A-F. In addition, for each RO, the ERs for that RO are used to determine a rank (step 108) of the RO with respect to other ROs. The ranking of all ROs submitted is also displayed to the users A-F. In the illustrated specific embodiment, steps 107 and 108 occur at the end of each computing cycle. In other embodiments, the ranking of users and the ranking of ROs can be done on an ongoing and constant basis. Computing cycles can be of any desired duration.
[0042] After the rankings of steps 107 and 108 have been performed, the next computing cycle starts and processing returns to step 102 as indicated in FIG. 5. Operation of the rating system proceeds through steps 102 through 109, from computing cycle to computing cycle, with ROs being submitted and ARs on the ROs being collected. Each AR is converted into an ER, and the ERs are used to update the reputations of the users as appropriate. The ranking of users is displayed to all the users of the system in order to provide feedback to the users and to keep the users interested and engaged with the system. The public ranking of users incentivizes the users to keep using the system and provides an element of healthy competition.
[0043] After a certain amount of time, the system determines (step 109) that the challenge period is over. In the illustrated example, the highest ranked idea (highest ranked RO) is determined to be the winner of the challenge.
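The end-of-cycle rankings of steps 107 and 108 can be sketched as follows. Ranking ROs by the average of their ERs follows the averaging described for challenges later in the document; the data layout (plain dicts keyed by user and by RO) is an assumption for illustration.

```python
def rank_end_of_cycle(reputations, effective_ratings):
    """Sketch of steps 107-108: rank users by reputation (RP) and
    rated objects (ROs) by the average of their effective ratings.

    reputations       : dict mapping user -> current reputation RP
    effective_ratings : dict mapping RO -> non-empty list of ERs
    Returns (user_ranking, ro_ranking), each ordered best-first.
    """
    # Step 107: users ordered by descending reputation.
    user_ranking = sorted(reputations, key=reputations.get, reverse=True)
    # Step 108: ROs ordered by descending average ER.
    ro_ranking = sorted(
        effective_ratings,
        key=lambda ro: sum(effective_ratings[ro]) / len(effective_ratings[ro]),
        reverse=True,
    )
    return user_ranking, ro_ranking
```

Under this sketch, the first entry of the RO ranking at the end of the challenge period is the highest ranked RO, which step 109 declares the winner.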
The user who submitted that highest ranked RO is alerted by the system that the user has won the reward (step 110) for the best idea. The public nature of the reward and the public ranking of users and the public ranking of ideas are intended to foster excitement and competition and future interest in using the rating system.
[0044] FIG. 6 is an illustration of a screen shot of what is displayed on the screen of the network appliance of the administrator ADMIN of the system. The ADMIN is being prompted to post a first challenge. The ADMIN types a description of the challenge into text box 20 as illustrated, and then selects the "POST" button 21. This causes the first challenge to be submitted to the system. Alternatively, an executive sponsor may also post and submit the first challenge.
[0045] FIG. 7 is an illustration of a screen shot of what is then displayed to the users A-F of the system. The initial challenge is advertised to the users. The text box 22 presented prompts the user to type a second challenge in response to the first challenge into the text box 22. After the user has entered the responsive challenge, the user can then select the "SUBMIT" button 23 to submit the second challenge to the system.
[0046] FIG. 8 is an illustration of a page displayed on the screen of a user's network appliance. The user has entered a second challenge (has typed in a challenge that the user feels best addresses the first challenge of how to make the company's products more environmentally friendly) into the text box 22 before selecting the "SUBMIT" button 23. In this
example, the user's challenge is "How can the company reduce the power consumption of its products?"
[0047] FIG. 9 is an illustration of a page displayed on the screen of the network appliance of each user of the system. The page shows each user-submitted second challenge as of the time of viewing. For each second challenge, the user is presented an associated "-1" selectable button and an associated "+1" selectable button. For example, if the user likes the challenge listed as "CHALLENGE 2", then the user can select the "+1" button 24 to the right of the listed "CHALLENGE 2", whereas if the user does not like the challenge listed as "CHALLENGE 2", then the user can select the "-1" button 25 to the right of the listed "CHALLENGE 2." Each user is informed of all of the submitted challenges using this page, and the user is prompted to vote (submit an AR) on each challenge using this page.
[0048] FIG. 10 is an illustration of a page displayed on the screen of the network appliances of each user of the system. The page shows the current ranking of all second challenges received in response to the first challenge.
[0049] FIG. 11 is an illustration of a screen shot of what is displayed on the screen of the network appliance of the administrator ADMIN of the system. The ADMIN is being prompted to post a challenge. The ADMIN types a description of the challenge into text box 26 as illustrated, and then selects the "POST" button 27. This causes the challenge to be submitted to the system. The challenge in this example is the highest rated of the second challenges submitted by users in response to the first challenge when the last challenge ended. The title of this challenge is: "HOW CAN THE COMPANY REDUCE THE POWER CONSUMPTION OF OUR PRODUCTS?"
[0050] FIG. 12 is an illustration of a screen shot of what is then displayed to the users A-F of the system. The challenge of FIG. 11 is advertised to the users.
The text box 28 presented prompts the user to type an idea into the text box 28. The user's idea is a solution to the challenge proposed in FIG. 11. After the user has entered his or her idea (which is the object to be rated or "RO"), the user can then select the "SUBMIT" button 29 to submit the RO to the system. This differs from the initial challenge previously presented to the users in FIG. 7. FIG. 7 prompted the users to enter a second challenge in response to the first challenge.
[0051] The manner in which the rating system operates in rating and ranking the second challenges is similar to the operation of the system when rating and ranking ideas. In one novel aspect, each AR is multiplied by a weighting factor to determine a corresponding effective rating (ER) for the second challenge. Rather than the ARs of challenges being averaged to determine a ranking of the challenges, the ERs of challenges are averaged to determine a ranking of challenges. The ERs regarding the challenges or ideas submitted by a particular user are used to determine a quantity called the "reputation" RP of the user. The reputation of a user is therefore dependent upon what other users thought about challenges submitted by the user. Such a reputation RP is maintained for each user of the system. The weighting factor that is used to determine an ER from an AR is a function of the reputation RP of the user who submitted the AR. If the user who submitted the AR had a higher reputation (RP is larger), then the AR of the user is weighted more heavily, whereas if the user who submitted the AR had a lower reputation (RP is smaller), then the AR of the user is weighted less heavily.
[0052] In a second novel aspect, the weighting factor used to determine an ER from an AR is also a function of a crowd voting probability value PT. The crowd voting probability value PT is a value that indicates the probability that the user who submitted the AR acts with the crowd in generating ARs.
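The description states that PT is obtained by applying Bayes' theorem to the counts of positive and negative votes, without giving the exact formula. One common Bayesian way to estimate such a probability from vote counts is a Beta-Binomial (Laplace-smoothed) posterior mean; the sketch below uses that approach as an assumption, not as the patent's actual equation.

```python
def crowd_probability(agree_count, disagree_count,
                      prior_agree=1, prior_disagree=1):
    """Hypothetical Bayesian estimate of the crowd-voting probability PT.

    agree_count    : number of the user's past ARs that matched the
                     majority (+1/-1) verdict on the rated item
    disagree_count : number of past ARs that went against the majority

    A Beta(prior_agree, prior_disagree) prior is updated with the
    observed counts and the posterior mean is returned. PT near 1
    means the user tends to vote with the crowd; near 0, against it.
    """
    return (agree_count + prior_agree) / (
        agree_count + disagree_count + prior_agree + prior_disagree
    )
```

With the uniform Beta(1, 1) prior assumed here, a user with no voting history gets PT = 0.5, and the estimate moves toward the observed agreement fraction as votes accumulate.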
The crowd is the majority of a population that behaves in a similar fashion. The probability value PT is determined by applying the Bayes theorem rule and taking into account the number of positive and negative votes. If the user who generated the AR is determined to have a higher probability of following the other users or voting with the crowd (PT is closer to 1), then the AR is weighted more heavily, whereas if the user who generated the AR is determined to have a lower probability of following other users or voting with the crowd (PT is closer to 0), then the AR is weighted less heavily.
[0053] In a third novel aspect, the weighting factor used to determine an ER from an AR is a function of the freshness RF of the AR. If the AR is relatively old (RF is a large value), then the AR is weighted less heavily, whereas if the AR is relatively fresh (RF is a small value), then the AR is weighted more heavily.
[0054] In a fourth novel aspect, a decay value D is employed in determining a user's reputation. One component of the user's reputation is an average of ERs submitted in the current computing cycle. A second component of the user's reputation is a function of a previously determined reputation RPT-1 for the user from the previous computing cycle. The component of the user's reputation due to the prior reputation RPT-1 is discounted by the decay value D. If the user was relatively inactive and disengaged from the system, then the decay value D is smaller (not equal to 1 but a little less, for example, D=0.998) and the impact of the user's earlier reputation RPT-1 is discounted more, whereas if the user is relatively active and engaged with the system, then the decay value D is larger (for example, D=1) and the impact of the user's earlier reputation RPT-1 is discounted less. As users submit ARs and ROs and use the system, the reputations of the users change.
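The two-component reputation update described in paragraph [0054] can be sketched as follows. The description specifies only the two components (the decayed prior reputation RPT-1 and the average of current-cycle ERs) and the role of D; combining them as a simple sum is an assumption for illustration.

```python
def update_reputation(prior_reputation, cycle_ers, user_active,
                      active_decay=1.0, inactive_decay=0.998):
    """Sketch of the [0054] reputation update for one computing cycle.

    prior_reputation : RPT-1, the reputation from the previous cycle
    cycle_ers        : list of ERs the user's submissions received
                       during the current cycle (may be empty)
    user_active      : True if the user was active/engaged this cycle

    ASSUMPTION: the decayed prior and the current-cycle ER average
    are summed; the exact combination is not given at this point in
    the text. The example D values (1 vs 0.998) come from [0054].
    """
    decay = active_decay if user_active else inactive_decay
    cycle_avg = sum(cycle_ers) / len(cycle_ers) if cycle_ers else 0.0
    return decay * prior_reputation + cycle_avg
```

An inactive user with no new ERs thus sees the prior reputation slowly eroded by the 0.998 factor each cycle, while an active user's prior reputation carries over undiminished and is augmented by the cycle's ER average.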
The network-based rating system is usable to solicit and receive challenges from a group of users, and to determine a ranking of the challenges to find the challenge that is likely the best. A ranking of challenges, in order from the highest average of ERs for the challenge to the lowest average of ERs for the challenge, is maintained and is displayed to users. At the end of a challenge processing period, the highest rated challenge is determined and that challenge is submitted to users in an idea processing step. In other embodiments, challenges submitted by users in the challenge processing step can be submitted to users for ideas in the idea processing step on an ongoing and constant basis. In these alternative embodiments, the challenge rated in the idea processing step may be a challenge other than the highest rated challenge determined in the challenge processing step. In another novel embodiment, the users rating challenges in the challenge processing step are the same users rating ideas in the idea processing step. In yet other embodiments, the users rating challenges in the challenge processing step are not the same users rating ideas in the idea processing step.
[0055] In this next section, the high-level components of a crowd platform will be disclosed:

High-Level Environmental Inputs:
[0056] Workflow
[0057] (Who Receives/Reviews What, i.e., Fusion) This would be the default workflow that could change or emerge over time as the platform and the crowd evolve.
[0058] Decision Makers & Points of Interaction on the Workflow:
[0059] 1) Identify the decision makers and their level of authority over certain topic areas, etc.; 2) Identify what points throughout the process require a manual decision; 3) Identify whether the decision making roles can emerge from the crowd over time.
[0060] Overall Budget
[0061] (For Reporting & Crowdfunding) Allow departments to pledge certain budget amounts and how they would like it dispersed (crowdfunding, prize-based, rapid prototyping, etc.)
[0062] Roles/Permissions of Crowd
[0063] (Level of Emergent Behavior Allowed) 1) The level of authority the crowd has over the direction or strategy of the platform (i.e., how the crowd shifts focus from a corporate strategic initiative to something else that may be unrelated); 2) The crowd's role for projects under a certain dollar threshold and the crowd's role for projects over that dollar threshold.
[0064] Crowdfunding Type (Standard or Enterprise)
[0065] 1) Standard: Crowd pitches in; Enterprise: Crowd helps decide where pre-existing budget goes.
[0066] Requirements to Validate Challenges
[0067] (Graduation Thresholds) 1) proposed challenge becomes a solvable challenge (Workflow); 2) Challenge Attributes.
[0068] Requirements to Validate Ideas (Graduation Thresholds) 1) idea validation process; 2) inputs and ideas required for an idea;
[0069] Requirements to Validate Implemented Solutions (Graduation Thresholds): How implemented solutions are validated i) quantitatively or ii) qualitatively by the crowd.
[0070] Points Trade-in Options (Tangible Benefit for Participating)
[0071] For an explanation of how to make and use the "network based rating system" disclosed in the description above, see U.S. patent application Ser. No. 13/491,560, entitled "User Reputation In Social Network And Ecommerce Rating Systems", by Manas S. Hardas and Lisa S. Purvis, filed Jun. 7, 2012. The subject matter of application Ser. No.
13/491,560 is incorporated herein by reference.
[0072] Although certain specific embodiments are described above for instructional purposes, the teachings of this patent document have general applicability and are not limited to the specific embodiments described above. Although a rating scale involving ratings of -1 and +1 is used in the specific embodiment set forth above, other rating scales can be used. Users may, for example, submit ratings on an integer scale of from one to ten. The rating system need not be a system for rating ideas, but rather may be a system for rating suppliers of products in an ecommerce application. The rating system may be a system for rating products such as in a consumer-report type of application. Although specific equations are set forth above for how to calculate a user's reputation and for how to calculate an effective rating in one illustrative example, the novel general principles disclosed above regarding user reputations and effective ratings are not limited to these specific equations. Although in the specific embodiment set forth above a user is a person, the term user is not limited to a person but rather includes automatic agents. An example of an automatic agent is a computer program like a web crawler that generates ROs and submits the ROs to the rating system. Accordingly, various modifications, adaptations, and combinations of various features of the described embodiments can be practiced without departing from the scope of the invention as set forth in the claims.

What is claimed is:
1.
A method comprising:
(a) publishing a first challenge to a plurality of users, wherein the first challenge solicits the users to submit second challenges for rating;
(b) storing in a database an initial reputation value for each of the users;
(c) receiving a first of the second challenges from a first user;
(d) receiving an actual rating for the first of the second challenges from a second user;
(e) determining an effective rating for the first of the second challenges that is a function of the initial reputation value for the second user and a probability that the second user followed other users in generating the actual rating;
(f) determining an updated reputation value for the second user based on the effective rating;
(g) replacing the initial reputation value for the second user with the updated reputation value for the second user in the database;
(h) determining a ranking of the second challenges based in part on the effective rating for the first of the second challenges;
(i) determining which of the second challenges has the highest ranking; and
(j) publishing the second challenge having the highest ranking to the plurality of users and soliciting ideas from the plurality of users to solve the second challenge having the highest ranking.
2. The method of claim 1, wherein (a) through (j) are performed by a rating system.
3. The method of claim 1, wherein the determining of (f) involves multiplying the actual rating of (d) by a weighting factor, and wherein the weighting factor is a function of other effective ratings, and wherein the other effective ratings are ratings for second challenges submitted by the second user.
4. The method of claim 1, wherein the determining of (f) involves averaging a plurality of effective ratings, wherein the effective ratings that are averaged are effective ratings for one or more challenges submitted by the first user.
5.
The method of claim 1, wherein the determining of (f) involves multiplying the actual rating of (d) by a weighting factor, and wherein the weighting factor is a function of other effective ratings, and wherein the other effective ratings are ratings for rated objects submitted by the second user.
6. The method of claim 1, wherein the determining of (f) involves multiplying the actual rating of (d) by a weighting factor, and wherein the weighting factor is a function of a reputation value for the second user.
7. The method of claim 1, wherein the determining of (f) involves multiplying the actual rating of (d) by a weighting factor, and wherein the weighting factor is a function of a freshness of the actual rating.
8. The method of claim 1, wherein the determining of (f) involves multiplying the actual rating of (d) by a weighting factor, and wherein the weighting factor is a function of the probability that the second user acts with the crowd in generating actual ratings.
9. The method of claim 1, wherein the probability that the second user acts with the crowd in generating actual ratings is a probability given a general sentiment about the challenge rated in (d).
10. The method of claim 1, wherein the determining of the updated reputation value of (d) involves multiplying a prior reputation value for the first user by a decay value.
11. The method of claim 1, further comprising the steps of:
(k) receiving a first idea from the first user;
(l) receiving an actual rating for the first idea from the second user;
(m) determining an effective rating for the first idea that is a function of the initial reputation value for the second user and the probability that the second user followed other users in generating the actual rating;
(n) determining an updated reputation value for the second user based on the effective rating of the first idea;
(o) replacing the initial reputation value for the second user with the updated reputation value of (n) for the second user in the database; and
(p) determining a ranking of ideas based in part on the effective rating for the first idea.
12. The method of claim 11, wherein (a) through (p) are performed by a rating system, and wherein the ranking of ideas determined in (p) is displayed by the rating system.
13.
A method comprising: (a) storing a database of rating information, wherein the rating information includes a reputation value for a user of a network-based rating system; (b) receiving an actual rating of a challenge onto the net work-based rating system, wherein the actual rating ofa challenge is a rating of one of a plurality of rated chal lenges; (c) determining an effective rating based at least in part on the actual rating and the reputation value stored in the database; (d) adding the effective rating into the database; (e) determining a ranking of the plurality of rated chal lenges based at least in part on effective ratings stored in the database, wherein (a) through (e) are performed by the network-based rating system; and (f) publishing the challenge in (e) having the highest rank ing to a plurality of users and soliciting ideas from the plurality ofusers to solve the challenge having the high est ranking. 14. The method ofclaim 13, wherein the rating information stored in the database further includes a probability value, wherein the probability value indicates a probability that a user votes with a crowd when the user submits actual ratings, and wherein the determining in (e) of the effective rating is also based on the probability value. 15. The method ofclaim 13, wherein the determining of(e) involves multiplying the actual rating by a weighting factor, wherein the weighting factor is a function ofa probability that a user votes with the crowd when the user submits actual ratings. Aug. 7, 2014 16. The method ofclaim 13, wherein the determining of(e) involves multiplying the actual rating by a weighting factor, whereinthe weighting factor is a function ofa freshness ofthe actual rating. 17. The method ofclaim 13, wherein the determining of(e) involves multiplying the actual rating by a weighting factor, wherein the weighting factor is a function of the reputation value. 18. 
The method of claim 13, wherein the reputation value for the user was calculated by the network-based rating sys tem based at least in part on an average of effective ratings. 19. The method of claim 13, wherein the reputation value for the user was calculated by the network-based rating sys tem based at least in part on an average ofeffective ratings for challenges submitted by the user. 20. The method of claim 13, wherein the reputation value for the user was calculated by the network-based rating sys tem, and wherein the calculation of the reputation value involved multiplying a prior reputation value by a decay value. 21. The method of claim 13, wherein the network-based rating system determines a reputation value for each of the plurality of users, the method further comprising: (f) determining a ranking ofthe users based at least in part on the reputation values for the plurality ofusers. 22. A network-based rating system comprising: means for storing a database ofrating information, wherein the rating information includes a plurality of effective ratings, wherein each effective rating corresponds to an actual rating, wherein each actual rating is a rating ofone of a plurality of rated challenges, wherein one of the rated challenges was submitted by a ?rst user, and wherein the rated information further includes a plural ity of reputation values, wherein one of the reputation values is a reputation value for a second user; means for determining an effective rating corresponding to an actual rating, wherein the actual rating was submitted by the second user for the rated challenges submitted by the ?rst user, wherein the effective rating is: l) a function ofthe actual rating submitted by the second user, and 2) a function ofthe reputation value for the second user; means for determining and displaying the highest ranked challenge from a ranking of the plurality of rated chal lenges based at least in part on effective ratings stored in the database; and means 
for receiving actual ratings ofideas from a plurality of users. 23. The network-based rating system of claim 22, wherein the means for storing is a portion of a server that stores database information, and wherein the means for determining an effective rating, the means for determining and displaying the highest ranked challenge, and the means for receiving actual ratings of challenges are parts of a rating system pro gram executing on the server. * * * * *
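Taken together, claims 13, 20, and 21 describe a pipeline: store reputation-weighted effective ratings, decay reputations over time, rank the challenges, publish the highest-ranked one to solicit ideas, and optionally rank the users themselves. The following is a minimal illustrative sketch of that pipeline; the class and method names, the default decay constant, and the use of a mean effective rating are all assumptions, as the claims specify only that a prior reputation value is multiplied by a decay value and that rankings derive from effective ratings.

```python
from collections import defaultdict

class RatingSystem:
    """Toy model of the network-based rating system of claims 13-21."""

    def __init__(self, decay: float = 0.95) -> None:
        self.decay = decay
        self.reputations: dict[str, float] = {}               # per-user reputation
        self.effective: dict[str, list[float]] = defaultdict(list)

    def add_effective_rating(self, challenge_id: str, rater: str,
                             actual_rating: float) -> None:
        """Steps (b)-(d) of claim 13: weight the actual rating by the
        rater's stored reputation and add it to the database."""
        reputation = self.reputations.get(rater, 1.0)
        self.effective[challenge_id].append(actual_rating * reputation)

    def decay_reputation(self, user: str) -> None:
        """Claim 20: the update multiplies the prior value by a decay value."""
        self.reputations[user] = self.reputations.get(user, 1.0) * self.decay

    def rank_challenges(self) -> list[str]:
        """Step (e): rank challenges by mean effective rating, best first."""
        return sorted(self.effective,
                      key=lambda c: sum(self.effective[c]) / len(self.effective[c]),
                      reverse=True)

    def publish_top_challenge(self) -> str:
        """Step (f): the highest-ranked challenge is published for ideas."""
        return self.rank_challenges()[0]

    def rank_users(self) -> list[str]:
        """Claim 21: rank the users by their reputation values."""
        return sorted(self.reputations, key=self.reputations.get, reverse=True)
```

In this sketch a rating from a low-reputation user contributes proportionally less to a challenge's effective score, so the published "top" challenge reflects the weighted consensus rather than the raw vote count.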